Hi,
I am trying to recognize a short command sentence in Swiss German.
The command consists of only 4 words. The goal is to recognize only this command (and nothing else) in my iOS app.

I have OpenEars running and have tried the following:

1. Use OpenEars’ “generateGrammar” method together with a “words” array and the dictionary setting [OneOfTheseWillBeSaidOnce : words].

2. Use the “AcousticModelEnglish” and fill the “words” array with invented words, such that when these words are spoken in English they sound like the Swiss German command sentence. There are approx. 20 words and approx. 10 sentences in the “words” array (the same applies to all of them: spoken in English, they sound like the Swiss German command sentence).

3. Alternatively, the same thing with the “AcousticModelSpanish” (with a different “words” array).

4. Since yesterday I have also tried the Rejecto plugin (using its “generateRejectingLanguageModel” method instead of “generateGrammar”)…

But all of the above variations share a problem: too much is recognized.
That is, the Swiss German command sentence is recognized 100% of the time, but unfortunately many other words and sentences are recognized as well (i.e. the recognition is not specific enough!).

What could I do to improve the recognition specificity for this Swiss German command sentence?

Let’s just troubleshoot one case if that’s OK – two languages and two different vocabulary structures might get a little confusing. Would you rather troubleshoot your grammar, or the Rejecto model? BTW, maybe this goal would be a better match for the German acoustic model.

I was using only one acoustic model at a time (but happened to try English and Spanish so far). And no – there is no reason (anymore) not to also try the German acoustic model – in fact I did just 5 minutes ago. (We had evaluated other products in English or Spanish, hence the choice of those two languages so far.)

About grammar vs. Rejecto: you tell me which suits this kind of problem better (i.e. recognizing a short Swiss German sentence with the highest specificity).

–> Rejecto’s documentation says: “Rejecto makes sure that your speech app does not attempt to recognize words which are not part of your vocabulary. This lets your app stick to listening for just the words it knows, and that makes your users happy.”

Therefore both seem to be necessary somehow – which do you suggest? Or can you even use both at the same time?

–> Rejecto’s documentation says: “Rejecto makes sure that your speech app does not attempt to recognize words which are not part of your vocabulary. This lets your app stick to listening for just the words it knows, and that makes your users happy.”

Yes, the intention of this documentation is to clarify that a grammar can listen for a multi-word phrase in exclusive terms (i.e. it won’t attempt to evaluate statistical nearness to your phrase but just try to hear it when complete, not hear it when not complete) and Rejecto will reject unknown words from a model made up of words. So if the goal is a sentence, a grammar is probably the right choice. If you were looking for one of several words by themselves, or phrases where you didn’t mind possible partial recognition of the phrase contents, Rejecto would be better.
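To make the contrast concrete, here is a rough sketch of the two generation calls (the vocabulary and file names are placeholders – adapt them to your project):

let lmGenerator = OELanguageModelGenerator()
let modelPath = OEAcousticModel.path(toModel: "AcousticModelGerman")

// Grammar: only the complete phrase can be recognized, in exclusive terms.
let grammarError: Error! = lmGenerator.generateGrammar(from: [ThisWillBeSaidOnce : [[OneOfTheseWillBeSaidOnce : ["my short phrase here"]]]], withFilesNamed: "MyVocabulary", forAcousticModelAtPath: modelPath)

// Rejecto: a statistical model over individual words, with rejection of out-of-vocabulary speech.
let rejectoError: Error! = lmGenerator.generateRejectingLanguageModel(from: ["my", "short", "phrase", "here"], withFilesNamed: "MyVocabulary", withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: modelPath)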

The thing is that the Swiss German short sentence consists of only 4 words. And 99% of the time they are spoken so quickly that it sounds like ONE long word. There is some background noise (imagine train-station-like background noise).

What do you suggest for this use case (grammar vs. Rejecto)? You are the expert :)

Sorry, it’s difficult for me to imagine this case without an example. Can you share with me something similar enough so I can understand how a four-word phrase can sound like a single word when spoken?

Got it, thank you for explaining. OK, let’s first try and see whether the simplest thing works, and then we can explore other ways if it doesn’t. Let’s start with using a grammar with the German acoustic model and see how far we get. What are your results when you do that? Your grammar should have the whole sentence as a single string in a single array item in your dictionary, not an array of individual words.
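In other words, roughly this shape (a sketch only – substitute your real sentence and names):

let words = ["my whole sentence here"] // the entire sentence as a single string, one array item
let grammar = [ThisWillBeSaidOnce : [[OneOfTheseWillBeSaidOnce : words]]]
let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelGerman"))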

This always has to be tested out on your end for your use case (with luck, doing this will also help with your background noise issue). I think there is a note about it at the bottom of the acoustic model page with more info.

Same issue: the sentence is recognized 100% of the time – but so are many other sentences of similar length. For example, if I speak the sentence with the last of the four words replaced by another word, the grammar method unfortunately still recognizes the sentence!

OK, can you show me the shortest possible code excerpt (i.e. just the part where you create the grammar and start listening for it, making absolutely sure that none of your other troubleshooting attempts are still also in effect), replacing the four words with a different four words if you need to for confidentiality?

OK, I’m a bit worried that there is cross-contamination between your multiple troubleshooting approaches in the code you are working with and showing me (I’ve touched on this concern a couple of times in this discussion, because it is a common situation after a developer has tried many things), so I think I’d like to rule that out before we continue.

Could you put away your existing work and zip it into an archive somewhere for the time being, and then make an entirely new project with the tutorial only using the approach we’ve chosen here (a grammar using stock OpenEars and the German acoustic model), and do it with four words from the start where you would be comfortable sharing all the vocabulary and listening initialization code with me and 100% of the logging output? Then we can continue without worrying about replacing words or old code hanging around. Thanks!

Here is the link to the test project you asked for (an entirely new project built from the tutorial, using only the approach we’ve chosen here: a grammar using stock OpenEars and the German acoustic model):

Hi, sorry, I really don’t run projects locally for this kind of troubleshooting – I can only ask for your logging output and code excerpts as I have time to assist. I asked you to make a new project so we could be sure your old approaches weren’t entering into the troubleshooting process, but I apologize if this created confusion about whether I would be running the test project locally and debugging it myself.

And also, I am not asking you for debugging. I am asking how to apply the settings OpenEars offers to get the recognition success rate (and specificity!) to a level that is acceptable for production.

So – are there any more settings I can adjust in order to improve specificity for our one-sentence words array?

I’d also like to take this opportunity to remind you that even when we start discussing Rejecto- or RuleORama-related issues, the license of the plugins does not allow them to be redistributed anywhere, so make sure not to post the demos or licensed versions of those products anywhere like Github or anywhere else which enables sharing.

Super, we can test a couple of things now that we’ve level-set. First question: would it be possible for you to use RuleORama, or is Rejecto the only plugin you would like to test with? I’m asking because the easiest first step is to see if your results are better with RuleORama because you already have a working grammar, but if you don’t want to use it, we can skip it and try something with Rejecto.
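For reference, if you do want to try it, the switch from a stock OpenEars grammar to RuleORama is roughly this small change (names are placeholders, and this assumes the framework is installed and linked):

// The grammar dictionary itself stays the same; the generation call and the path lookup change:
let err: Error! = lmGenerator.generateFastGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelGerman"))
let lmPath = lmGenerator.pathToSuccessfullyGeneratedRuleORamaRuleset(withRequestedName: fileName)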

Halle,
can we please continue with Rejecto? I realize the RuleORama demo is again not usable after download – and I feel that I lose a tremendous amount of time just setting up these demo projects. Also, your manual contains ObjC code under the Swift 3 chapter – which is not pleasant either. Can you please provide me with a working RuleORama demo (Swift 4), or we continue with Rejecto. Let me know, OK?

With RuleORama, I end up with the following error (see log below). Can you please tell me what is still wrong? And also, what does the words array now need to look like? It seems that RuleORama wants a different one.

Here is the RuleORama log (I’m still not sure whether I translated all the ObjC code from the manual correctly to Swift):

2018-04-23 14:08:20.169350+0300 TestOpenEars[4109:2062039] Error: Error Domain=com.politepix.openears Code=6000 "Language model has no content." UserInfo={NSLocalizedDescription=Language model has no content.}
2018-04-23 14:08:20.170097+0300 TestOpenEars[4109:2062039] It wasn't possible to create this grammar: {
OneOfTheseWillBeSaidOnce = (
"esch do no frey"
);
}
Error while creating initial language model: Optional(Error Domain=LanguageModelErrorDomain Code=10040 "It wasn't possible to generate a grammar for this dictionary, please turn on OELogging for more information" UserInfo={NSLocalizedDescription=It wasn't possible to generate a grammar for this dictionary, please turn on OELogging for more information})
2018-04-23 14:08:20.170989+0300 TestOpenEars[4109:2062039] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
2018-04-23 14:08:20.171130+0300 TestOpenEars[4109:2062039] Creating shared instance of OEPocketsphinxController
2018-04-23 14:08:20.177706+0300 TestOpenEars[4109:2062039] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2018-04-23 14:08:20.177741+0300 TestOpenEars[4109:2062039] Error: you have invoked the method:
startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF
with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this grammar, that means the correct path to your grammar that you should pass to this method's languageModelPath argument is as follows:
NSString *correctPathToMyLanguageModelFile = [myLanguageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"TheNameIChoseForMyVocabulary"];
Feel free to copy and paste this code for your path to your grammar, but remember to replace the part that says "TheNameIChoseForMyVocabulary" with the name you actually chose for your grammar or you will get this error again (and replace myLanguageModelGenerator with the name of your OELanguageModelGenerator instance). Since this file is required, expect an exception or undocumented behavior shortly.
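For reference, the Swift equivalent of the path call named in that error message would look something like this:

let lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)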

Additionally, I
– inserted the RuleORama framework
– set the Other Linker Flags to -ObjC
– set the bridging header path correctly

Here is the code:

import UIKit
class ViewController: UIViewController, OEEventsObserverDelegate {
var openEarsEventsObserver = OEEventsObserver()
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
self.openEarsEventsObserver.delegate = self
let lmGenerator = OELanguageModelGenerator()
let accusticModelName = "AcousticModelGerman"
let fileName = "GermanModel"
let words = ["esch do no frey"]
// let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
// let err: Error! = lmGenerator.generateGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
let err: Error! = lmGenerator.generateFastGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
var lmPath = ""
var dictPath = ""
if(err != nil) {
print("Error while creating initial language model: \(err)")
} else {
lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
lmPath = lmGenerator.pathToSuccessfullyGeneratedRuleORamaRuleset(withRequestedName: fileName)
}
// ************* Necessary for logging **************************
OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
// ************* Necessary for logging **************************
do {
try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
} catch {
print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
}
OEPocketsphinxController.sharedInstance().vadThreshold = 3.2;
OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
}
// An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
// This might be useful in debugging a conflict between another sound class and Pocketsphinx.
func pocketsphinxRecognitionLoopDidStart() {
print("Local callback: Pocketsphinx started.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
func pocketsphinxDidStartListening() {
print("Local callback: Pocketsphinx is now listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
func pocketsphinxDidDetectSpeech() {
print("Local callback: Pocketsphinx has detected speech.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
func pocketsphinxDidDetectFinishedSpeech() {
print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
// likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
func pocketsphinxDidStopListening() {
print("Local callback: Pocketsphinx has stopped listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
// Going to react to speech until listening is resumed. This can happen as a result of Flite speech being
// in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
func pocketsphinxDidSuspendRecognition() {
print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
// having been suspended it is now resuming. This can happen as a result of Flite speech completing
// on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
func pocketsphinxDidResumeRecognition() {
print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
}
// An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
// recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
}
// An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
// complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
func fliteDidStartSpeaking() {
print("Local callback: Flite has started speaking") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
// complex interaction between sound classes.
func fliteDidFinishSpeaking() {
print("Local callback: Flite has finished speaking") // Log it.
}
func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
}
func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
}
/** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
func pocketsphinxFailedNoMicPermissions() {
print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
}
/** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result (will only be returned on iOS7 or later).*/
func micPermissionCheckCompleted(withResult: Bool) {
print("Local callback: mic check completed.")
}
}

• It is of course completely fine if you don’t want to use RuleORama, which is the reason I asked first whether it was OK for you. This is not an upsell – I asked because there is no easier time to try it out than right after you have set up a grammar, and if you wanted to hear all the options, this was the most convenient moment to explore that one; any other timing will be less convenient because we will be changing from a grammar to a language model afterwards. My intention was to explain to you how to add it to your existing project if you agreed to try it out. It is fine with me either to skip it or to take the time to get it working.

• This is too unconstructive for me while I’m working hard to give you fast and helpful support for an unsupported language, and I’d appreciate it if you’d consider that we both have stresses in this process: “I realize the RuleORama demo is again not usable after download – and I feel that I lose a tremendous amount of time just setting up these demo projects. Also, your manual contains ObjC code under the Swift 3 chapter – which is not pleasant either.” I disagree with you about the origin of the issues in this case, but more importantly, I just don’t want to proceed with this tone, which also seemed to come up due to my trying hard to remove all the unknown variables from our troubleshooting process, and I’m likely to close the question if this is an ongoing thing, even though we’ve both invested time in it. You don’t have to agree, but I don’t want you to be surprised if I close this discussion for this reason.

• I want to warn you here that it is possible there is no great solution because this is an unsupported language, so that you have enough info to know whether to invest more time. I am happy to help, and I have some theories about how we might be able to make this work, but not every question has a perfect answer.

That all said, I just noticed from your RuleORama install that there is something we need to fix in both installs, which is that in both cases the logging is being called too late to log the results of generating the language model or grammar. Can you move these:

OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true

To run right after super.viewDidLoad() and share the logging output from both projects?
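That is, the top of viewDidLoad should look roughly like this:

override func viewDidLoad() {
    super.viewDidLoad()
    OELogging.startOpenEarsLogging()
    OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
    // ...observer setup, language model/grammar generation, and listening follow here...
}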

I am sorry about my tone during the RuleORama demo trials today – I felt a bit stressed out since things did not work immediately :/ I do appreciate your help!

As requested, I moved the logging code accordingly (i.e. to right after super.viewDidLoad()).

Please see the following two forum entries for the two logs:

(1): Done with Rejecto
–> I spoke 5 times; the first two times I spoke correctly (i.e. our words array) – the third, fourth and fifth time I spoke incorrectly (but it was unfortunately still recognized by Rejecto)

(2): Done with RuleORama
–> There is still a bug in the code, as can be seen in the log…

Cool, thank you. Do you have a log for your earlier project that is just OpenEars using a grammar (without Rejecto), with the logging calls moved to the top? I thought that was the main file we started debugging above, and then we were going to quickly try out RuleORama if you wanted. Those two grammar-using projects are the ones I’m curious about right now, because it looks like there is a flaw in the grammar, and I want to know whether the same error is happening in both.

No problem, just keep in mind that I asked for that project to have a clean slate to work from without mixing up code from multiple approaches, so we want to get back to that state of clarity and simplicity.

Uncommenting whichever of the generateGrammar/generateFastGrammar lines is to be used by the respective grammar project.

For your Rejecto project, please open AcousticModelGerman.bundle/LanguageModelGeneratorLookupList.text at whatever location you are really linking to it (please be ABSOLUTELY sure you are performing this change on the real acoustic model that your project links to and moves into your app bundle, wherever that is located, or our troubleshooting work on this project will be guaranteed to be unsuccessful) and look for the following two lines:

es ee s
esf ee s f

and change them to this:

es ee s
eschdonofrey @ ss d oo n oo f r @ ii
esf ee s f

and then you have to change your Rejecto language model generation code (which you have never shown me here) so that it just creates a model for the single word “eschdonofrey” – see the sketch after the phoneme entries below. Do not make this change to your grammar projects. For contrast, you can also try changing the acoustic model entry to this instead with your Rejecto project, with slightly different phonemes:

es ee s
eschdonofrey ee ss d oo n oo f r ee ii
esf ee s f
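The corresponding Rejecto generation change is small – roughly this (file and model names as in your project):

let words = ["eschdonofrey"] // the single fused word matching the new lookup entry
let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelGerman"))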

If none of these have better results, this will be the point at which we will have to stop the troubleshooting process, because it is guaranteed to get confused results if we try to further troubleshoot three different implementations in parallel which have hosted other different implementations at different times. If one of these projects has improved results, we can do a little bit more investigation of it, under the condition that the other two projects are put away and it is possible for me to rely on the fact that we are only investigating one clean project at a time moving forward. Let me know how it goes!

They are being marked as spam due to the multiple external links. Please keep all the discussion in here so it is a useful resource to other people with the same issue. I recommend doing this without all the confusion and complexity by returning to the premise of troubleshooting exactly one case at a time. You can choose which to begin with.

Please put all your documentation of what is going on in this forum, thank you. The Github repo will change or disappear (it has already disappeared and then returned with different content in the course of just this discussion, so there is a previous link to it which is already out of date) and as a consequence make this discussion of no use to anyone who has a similar issue to any of the many questions you are asking about.

OK – I just feel that the logs and the VC’s code make the forum posts tremendously large, and links would somehow be nicer. I can put it into a new Github repo if you prefer, or I can paste the huge logs into this forum. What do you prefer?

Paste the logs and VC contents in this forum, thank you. There are many other discussions here with big logs, and they provide a way for searchers to get hits for the specific errors and problems they are troubleshooting without my having to answer the same questions many times, as well as for me to be able to go back and find either bugs or points of confusion. When that is all hidden away in a repo, it will eventually disappear as the repo changes or is removed, or cause the support requests to occur in that repo, and it won’t help anyone solve their problems where it would otherwise be possible for them to follow up with an “I got the same log result but your fix isn’t affecting my case”. It’s a very important part of there being visibility for solutions.

If you want to collapse the logs so that they aren’t as visually big, you can put spoiler tags around them:
[spoiler]
[/spoiler]
This will make it possible to open and close them, so they don’t take up vertical space if it bothers you.

import UIKit
class ViewController: UIViewController, OEEventsObserverDelegate {
var openEarsEventsObserver = OEEventsObserver()
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
// ************* Necessary for logging **************************
OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
// ************* Necessary for logging **************************
self.openEarsEventsObserver.delegate = self
let lmGenerator = OELanguageModelGenerator()
let accusticModelName = "AcousticModelGerman"
let fileName = "GermanModel"
// let words = ["esch do no frey"]
let words = ["eschdonofrey"]
// let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: true, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
var lmPath = ""
var dictPath = ""
if(err != nil) {
print("Error while creating initial language model: \(err)")
} else {
// lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
}
do {
try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
} catch {
print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
}
OEPocketsphinxController.sharedInstance().vadThreshold = 3.2;
OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
}
// An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
// This might be useful in debugging a conflict between another sound class and Pocketsphinx.
func pocketsphinxRecognitionLoopDidStart() {
print("Local callback: Pocketsphinx started.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
func pocketsphinxDidStartListening() {
print("Local callback: Pocketsphinx is now listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
func pocketsphinxDidDetectSpeech() {
print("Local callback: Pocketsphinx has detected speech.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
func pocketsphinxDidDetectFinishedSpeech() {
print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
// likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
func pocketsphinxDidStopListening() {
print("Local callback: Pocketsphinx has stopped listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
// Going to react to speech until listening is resumed. This can happen as a result of Flite speech being
// in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
func pocketsphinxDidSuspendRecognition() {
print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
// having been suspended it is now resuming. This can happen as a result of Flite speech completing
// on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
func pocketsphinxDidResumeRecognition() {
print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
}
// An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
// recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
}
// An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
// complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
func fliteDidStartSpeaking() {
print("Local callback: Flite has started speaking") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
// complex interaction between sound classes.
func fliteDidFinishSpeaking() {
print("Local callback: Flite has finished speaking") // Log it.
}
func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
}
func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
}
/** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
func pocketsphinxFailedNoMicPermissions() {
print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
}
/** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result (will only be returned on iOS7 or later).*/
func micPermissionCheckCompleted(withResult: Bool) {
print("Local callback: mic check completed.")
}
}

Rejecto Language Model creation as it looks right now – still giving an error on startup…

import UIKit
class ViewController: UIViewController, OEEventsObserverDelegate {
var openEarsEventsObserver = OEEventsObserver()
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
// ************* Necessary for logging **************************
OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
// ************* Necessary for logging **************************
self.openEarsEventsObserver.delegate = self
let lmGenerator = OELanguageModelGenerator()
let accusticModelName = "AcousticModelGerman"
let fileName = "GermanModel"
// let words = ["esch do no frey"]
let words = ["eschdonofrey"]
// let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
var lmPath = ""
var dictPath = ""
if(err != nil) {
print("Error while creating initial language model: \(err)")
} else {
// lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
}
do {
try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
} catch {
print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
}
OEPocketsphinxController.sharedInstance().vadThreshold = 3.2;
OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
}
// An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
// This might be useful in debugging a conflict between another sound class and Pocketsphinx.
func pocketsphinxRecognitionLoopDidStart() {
print("Local callback: Pocketsphinx started.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
func pocketsphinxDidStartListening() {
print("Local callback: Pocketsphinx is now listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
func pocketsphinxDidDetectSpeech() {
print("Local callback: Pocketsphinx has detected speech.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
func pocketsphinxDidDetectFinishedSpeech() {
print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
// likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
func pocketsphinxDidStopListening() {
print("Local callback: Pocketsphinx has stopped listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
// Going to react to speech until listening is resumed. This can happen as a result of Flite speech being
// in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
func pocketsphinxDidSuspendRecognition() {
print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
// having been suspended it is now resuming. This can happen as a result of Flite speech completing
// on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
func pocketsphinxDidResumeRecognition() {
print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
}
// An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
// recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
}
// An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
// complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
func fliteDidStartSpeaking() {
print("Local callback: Flite has started speaking") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
// complex interaction between sound classes.
func fliteDidFinishSpeaking() {
print("Local callback: Flite has finished speaking") // Log it.
}
func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
}
func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
}
/** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
func pocketsphinxFailedNoMicPermissions() {
print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
}
/** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result (will only be returned on iOS7 or later).*/
func micPermissionCheckCompleted(withResult: Bool) {
print("Local callback: mic check completed.")
}
}

import UIKit
class ViewController: UIViewController, OEEventsObserverDelegate {
var openEarsEventsObserver = OEEventsObserver()
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
// ************* Necessary for logging **************************
OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
// ************* Necessary for logging **************************
self.openEarsEventsObserver.delegate = self
let lmGenerator = OELanguageModelGenerator()
let accusticModelName = "AcousticModelGerman"
let fileName = "GermanModel"
let words = ["esch do no frey"]
let grammar = [
ThisWillBeSaidOnce : [
[ OneOfTheseWillBeSaidOnce : words]
]
]
// let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
// let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
let err: Error! = lmGenerator.generateFastGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
var lmPath = ""
var dictPath = ""
if(err != nil) {
print("Error while creating initial language model: \(err)")
} else {
// lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
lmPath = lmGenerator.pathToSuccessfullyGeneratedRuleORamaRuleset(withRequestedName: fileName)
dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
}
do {
try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
} catch {
print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
}
OEPocketsphinxController.sharedInstance().vadThreshold = 3.2;
OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
}
// An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
// This might be useful in debugging a conflict between another sound class and Pocketsphinx.
func pocketsphinxRecognitionLoopDidStart() {
print("Local callback: Pocketsphinx started.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
func pocketsphinxDidStartListening() {
print("Local callback: Pocketsphinx is now listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
func pocketsphinxDidDetectSpeech() {
print("Local callback: Pocketsphinx has detected speech.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
func pocketsphinxDidDetectFinishedSpeech() {
print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
// likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
func pocketsphinxDidStopListening() {
print("Local callback: Pocketsphinx has stopped listening.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
// Going to react to speech until listening is resumed. This can happen as a result of Flite speech being
// in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
func pocketsphinxDidSuspendRecognition() {
print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
// having been suspended it is now resuming. This can happen as a result of Flite speech completing
// on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
// or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
func pocketsphinxDidResumeRecognition() {
print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
}
// An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
// recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
}
// An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
// complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
func fliteDidStartSpeaking() {
print("Local callback: Flite has started speaking") // Log it.
}
// An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
// complex interaction between sound classes.
func fliteDidFinishSpeaking() {
print("Local callback: Flite has finished speaking") // Log it.
}
func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
}
func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
}
/** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
func pocketsphinxFailedNoMicPermissions() {
print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
}
/** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result (will only be returned on iOS7 or later).*/
func micPermissionCheckCompleted(withResult: Bool) {
print("Local callback: mic check completed.")
}
}

2018-04-26 17:02:12.467180+0200 TestOpenEars[1005:277279] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
2018-04-26 17:02:12.468228+0200 TestOpenEars[1005:277279] Creating shared instance of OEPocketsphinxController
2018-04-26 17:02:12.484721+0200 TestOpenEars[1005:277279] RuleORama version 2.502000
2018-04-26 17:02:12.498086+0200 TestOpenEars[1005:277279] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelGerman
2018-04-26 17:02:12.502769+0200 TestOpenEars[1005:277279] Since there is no cached version, loading the g2p model for the acoustic model called AcousticModelGerman
2018-04-26 17:02:12.535249+0200 TestOpenEars[1005:277279] The word do was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
2018-04-26 17:02:12.535454+0200 TestOpenEars[1005:277279] the graphemes "d oo" were created for the word do using the fallback method.
2018-04-26 17:02:12.538121+0200 TestOpenEars[1005:277279] The word esch was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
2018-04-26 17:02:12.538570+0200 TestOpenEars[1005:277279] the graphemes "@ ss" were created for the word esch using the fallback method.
2018-04-26 17:02:12.541126+0200 TestOpenEars[1005:277279] The word frey was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
2018-04-26 17:02:12.541368+0200 TestOpenEars[1005:277279] the graphemes "f r @ ii" were created for the word frey using the fallback method.
2018-04-26 17:02:12.543943+0200 TestOpenEars[1005:277279] The word no was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
2018-04-26 17:02:12.544061+0200 TestOpenEars[1005:277279] the graphemes "n oo" were created for the word no using the fallback method.
2018-04-26 17:02:12.544117+0200 TestOpenEars[1005:277279] I'm done running performDictionaryLookup and it took 0.041384 seconds
2018-04-26 17:02:12.564690+0200 TestOpenEars[1005:277279] Starting dynamic language model generation
INFO: ngram_model_arpa_legacy.c(504): ngrams 1=3, 2=0, 3=0
INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
INFO: ngram_model_arpa_legacy.c(543): 3 = #unigrams created
INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
INFO: ngram_model_dmp_legacy.c(551): 3 = #unigrams created
2018-04-26 17:02:12.589480+0200 TestOpenEars[1005:277279] Done creating language model with CMUCLMTK in 0.024742 seconds.
2018-04-26 17:02:12.592952+0200 TestOpenEars[1005:277279] Generating fast grammar took 0.095226 seconds
INFO: ngram_model_trie.c(424): Trying to read LM in bin format
INFO: ngram_model_trie.c(457): Header doesn't match
INFO: ngram_model_trie.c(180): Trying to read LM in arpa format
INFO: ngram_model_trie.c(218): LM of order 1
INFO: ngram_model_trie.c(220): #1-grams: 3
2018-04-26 17:02:12.596542+0200 TestOpenEars[1005:277279] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2018-04-26 17:02:12.596608+0200 TestOpenEars[1005:277279] Error: you have invoked the method:
startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF
with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this grammar, that means the correct path to your grammar that you should pass to this method's languageModelPath argument is as follows:
NSString *correctPathToMyLanguageModelFile = [myLanguageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"TheNameIChoseForMyVocabulary"];
Feel free to copy and paste this code for your path to your grammar, but remember to replace the part that says "TheNameIChoseForMyVocabulary" with the name you actually chose for your grammar or you will get this error again (and replace myLanguageModelGenerator with the name of your OELanguageModelGenerator instance). Since this file is required, expect an exception or undocumented behavior shortly.

Regarding your Rejecto results: you can now pick whichever one of them is better and experiment with raising or reducing the value of withWeight in this line (the lowest possible value is 0.1 and the largest possible value is 1.9):
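let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))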

What does the symbol “@” represent in the LookupList.text? (The double ee’s and ii’s I can somehow interpret, but what does “@” really mean?)

It represents the Hochdeutsch phone that is written in the IPA as ə. This is getting outside of the things I support here, but there should be enough info in that explanation for you to find sources outside of these forums to continue your investigation if you have further questions.

Thank you for the explanation – I will investigate the “0.1 < withWeight < 1.9” parameter as well as try the two LookupList.text suggestions (and maybe play with that file a little more to understand its effects)…

Also, one thing I don’t understand is that it oftentimes reports having recognized the “sentence in question” several times, as can be seen in this log excerpt:

2018-04-26 17:12:25.815209+0200 TestOpenEars[1012:281905] Pocketsphinx heard "esch do no frey esch do no frey esch do no frey esch do no frey esch do no frey" with a score of (-130134) and an utterance ID of 19.
Local callback: The received hypothesis is esch do no frey esch do no frey esch do no frey esch do no frey esch do no frey with a score of -130134 and an ID of 19

What could this be? I.e., why does the hypothesis contain our “sentence in question” this many times?
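(One possible app-side mitigation, sketched here as an assumption rather than an OpenEars feature: treat any hypothesis that contains the phrase, however many times, as a single detection inside the delegate callback.)

func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) {
    // Hypothetical workaround: the hypothesis may repeat the phrase several
    // times within one utterance, so collapse any number of repetitions into
    // a single detection. "esch do no frey" is the phrase from the grammar below.
    let phrase = "esch do no frey"
    if hypothesis.contains(phrase) {
        print("Command detected once (score \(recognitionScore!), utterance \(utteranceID!))")
    }
}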

For both solutions (Rejecto and RuleORama), my question to you: are you able to interpret the logs in order to tune one or the other solution even further? I am completely lost as to what to tune here, since the logs look very cryptic to me.

If yes, which examples should I post? (Positive, false-positive, or false-negative ones?)

Yeah, that makes a certain amount of sense, because this use case is very borderline for RuleORama – it isn’t usually great with a rule that has a single entry, and the other elements here that push the limits of what is likely to work are probably making it worse. We can shelve the investigation of RuleORama now that we have seen a result from it.

I’ve recommended what is possible to tune for Rejecto; there is nothing else. If it isn’t doing well yet, that is likely just due to it being a different language. You can also tune vadThreshold, but I recommended doing that at the start, so I am assuming it is correct now.

With Rejecto: what I don’t understand is why the ending of my “sentence in question” does not seem to matter at all. I.e., whether I say “eschdonofrey” or “eschdonoAnything” makes no difference – the sentence is still recognized (which leads to so many false positives!).

Do you have any idea how to improve this ending problem with my sentence in question?

This is because there are multiple things about this setup which are a problem for ideal recognition with these tools: it has high uncertainty because it is a different language, and language models aren’t designed to work with a single word. I expect changing the weight to affect this outcome, but if it doesn’t, that is the answer on whether this approach will work.

import UIKit

class ViewController: UIViewController, OEEventsObserverDelegate {

    var openEarsEventsObserver = OEEventsObserver()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.

        // ************* Necessary for logging **************************
        OELogging.startOpenEarsLogging() // Receive full OpenEars logging in case of any unexpected results.
        OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
        // ************* Necessary for logging **************************

        self.openEarsEventsObserver.delegate = self

        let lmGenerator = OELanguageModelGenerator()
        let accusticModelName = "AcousticModelGerman"
        let fileName = "GermanModel"
        let words = ["esch do no frey"]
        let grammar = [
            ThisWillBeSaidOnce : [
                [OneOfTheseWillBeSaidOnce : words]
            ]
        ]
        // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
        let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

        if err != nil {
            print("Error while creating initial language model: \(err!)")
            return // Don't start listening without generated files; that produces the nil languageModelPath error shown in the log above.
        }

        let lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
        let dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)

        do {
            try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
        } catch {
            print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
        }

        OEPocketsphinxController.sharedInstance().vadThreshold = 3.2
        OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
        print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
    }

    // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
    // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
    func pocketsphinxRecognitionLoopDidStart() {
        print("Local callback: Pocketsphinx started.") // Log it.
    }

    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
    func pocketsphinxDidStartListening() {
        print("Local callback: Pocketsphinx is now listening.") // Log it.
    }

    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
    func pocketsphinxDidDetectSpeech() {
        print("Local callback: Pocketsphinx has detected speech.") // Log it.
    }

    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
    func pocketsphinxDidDetectFinishedSpeech() {
        print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
    }

    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
    // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
    func pocketsphinxDidStopListening() {
        print("Local callback: Pocketsphinx has stopped listening.") // Log it.
    }

    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
    // going to react to speech until listening is resumed. This can happen as a result of Flite speech being
    // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
    func pocketsphinxDidSuspendRecognition() {
        print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
    }

    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
    // having been suspended it is now resuming. This can happen as a result of Flite speech completing
    // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
    func pocketsphinxDidResumeRecognition() {
        print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
    }

    // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
    // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
    func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
        print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
    }

    // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
    // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
    func fliteDidStartSpeaking() {
        print("Local callback: Flite has started speaking") // Log it.
    }

    // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
    // complex interaction between sound classes.
    func fliteDidFinishSpeaking() {
        print("Local callback: Flite has finished speaking") // Log it.
    }

    func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
        print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure!), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
    }

    func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop teardown. Turn on OELogging.startOpenEarsLogging() to learn why.
        print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure!)") // Log it.
    }

    /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
    func pocketsphinxFailedNoMicPermissions() {
        print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
    }

    /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result (will only be returned on iOS7 or later).*/
    func micPermissionCheckCompleted(withResult: Bool) {
        print("Local callback: mic check completed.")
    }
}

I think that if your original grammar implementation doesn’t raise any errors and is returning output that you can evaluate, we have explored everything that is within the realm of supportable troubleshooting here. I am going to close this as answered, because we have covered all of the topics which have come up at substantial length, and there should be enough for you to examine further outside of an ongoing troubleshooting process with me.

If you have very specific questions later on (I mean questions about a single aspect of a single implementation with a single acoustic model), it’s OK to start very focused new topics. Just please create a fresh implementation you are comfortable sharing, make sure it doesn’t have accumulated code from different implementations, and remember to share the info here without prompting from me so the questions don’t get closed. Thanks and good luck!

