…that when it is listening, it will bring back utterances that aren’t specifically in this dictionary like “BACK”, “CURRENT”, “NAV NOTES”, “NAV ITEM NO OUT”

Is there a way to have only utterances that are in the dictionary come back, with the ones in the dictionary weighted higher than ones that aren’t? In my case it brings back an utterance like “BACK” when I say “CHECK”, even though I don’t have “BACK” in the dictionary by itself; I specifically added “GO BACK” to try to keep it from making the wrong choice.

Yes, take a look in the docs for information about grammars in OpenEars (versus language models, which you are using above), and after looking into that and trying it out, you may possibly also want to investigate the use of RuleORama in case you need it in real time.
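For reference, switching from a language model to a grammar in OpenEars means passing a ruleset dictionary to the generator rather than a flat list of words. A minimal sketch (the phrases here are placeholders standing in for the app’s actual command set, and "MyGrammar"/the English acoustic model name are assumptions):

```objc
#import <OpenEars/OELanguageModelGenerator.h>
#import <OpenEars/OEAcousticModel.h>

OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];

// A grammar constrains recognition to these exact rule expansions,
// unlike a language model, which scores arbitrary word sequences.
NSDictionary *grammar = @{
    ThisWillBeSaidOnce : @[
        @{ OneOfTheseWillBeSaidOnce : @[@"GO BACK", @"CHECK", @"NAV NOTES"] }
    ]
};

NSError *error = [generator generateGrammarFromDictionary:grammar
                                           withFilesNamed:@"MyGrammar"
                                   forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
```

The generated files are then handed to OEPocketsphinxController the same way as a language model, with the grammar flag set when starting listening.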

Thanks Halle. I am now using the grammar instead of a language model, and when I say the phrases in the dictionary it is way more accurate than before, which is great. One thing I am seeing, though, is that because there is no Rejecto it seems to be overly aggressive about returning something in the dictionary even when what is being said is not even close to the phrases. Even when I say a single-syllable word it will bring back phrases. Are there any strategies to mimic what Rejecto does, or am I maybe missing something that can keep speech from always tripping it into returning an utterance?

Yes. I put a slider in my app so that at any time I can change it from 1 to 5 in 0.1 increments. At 4.2 or above it doesn’t recognize most words. At 4 it usually recognizes words, but it still trips up a lot when I say words that are nowhere near any of the phrases.

OK, that’s surprising, but vadThreshold would be the available way to address this. If the utterances you are using in the grammar are particularly short, you may wish to make them a bit longer so they are more distinct from each other and less easily substituted for other utterances.
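For anyone following along, vadThreshold is a property on the shared OEPocketsphinxController and is typically tuned by experiment for a given app, mic, and environment; the 3.2 below is an illustrative value, not a recommendation:

```objc
#import <OpenEars/OEPocketsphinxController.h>

// Higher values make voice-activity detection stricter, so quieter or
// shorter sounds are less likely to be treated as speech at all.
// Tune in small increments and re-test, as in the slider approach above.
[OEPocketsphinxController sharedInstance].vadThreshold = 3.2f;
```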

OK, I have done that. Still very weird behavior. I am testing this in pretty ideal conditions with no background noise and a very good headset with a noise-cancelling mic. When I say the words that appear in the dictionary it works flawlessly. When I am just saying other long phrases, it trips up almost every time.

“THAT IS NOT VERY GOOD” and it came back with “NEGATIVE”
“THIS IS A TEST” and it came back with “MIRA SECURING”
“THIS IS A TEST” and it came back with “MIRA SPEEDS”
“THERE REALLY IS SOMETHING WRONG” and it came back with “MIRA RADIO-OUT”
“THAT IS WEIRD” and it came back with “READ ITEM”

Since there is no Rejecto for grammars, are there any strategies that could simulate Rejecto? One thing I just tried was adding every letter of the alphabet to the dictionary, and now almost any time it hears anything it picks one of those unless I say a phrase specifically in the dictionary, which is definitely helping a lot.

Still confused why, using a grammar, it always seems to want to match something in the dictionary. When I say almost anything, it now always comes back with one of the letters. It seems like there should be something so that if, between the start of recognition and the silence delay, there were clearly many syllables and words being said, it wouldn’t match something with one syllable.
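The alphabet-letter trick described above can be wired up as a poor man’s Rejecto by discarding any hypothesis that matches one of the filler entries in the OEEventsObserver delegate callback. A sketch (the garbage-word set here is hypothetical; it would mirror whatever filler entries were added to the grammar):

```objc
// OEEventsObserverDelegate callback. Single letters were added to the grammar
// purely to absorb out-of-grammar speech; when one of them wins, we treat the
// utterance as a rejection rather than a real command.
- (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                        recognitionScore:(NSString *)recognitionScore
                             utteranceID:(NSString *)utteranceID {

    NSSet *garbageWords = [NSSet setWithArray:@[@"A", @"B", @"C"]]; // …rest of the filler entries
    if ([garbageWords containsObject:hypothesis]) {
        return; // out-of-grammar speech was absorbed by a filler entry; ignore it
    }

    // Only real in-grammar phrases reach this point.
    NSLog(@"Accepted phrase: %@", hypothesis);
}
```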

When saying “I didn’t say that” I get “HEY THERE”. Often when I pronounce “Hello People” I get “HEY THERE”. I understand that these sound similar. Is there a way to get the probability of a detection so I can filter out hypotheses with low credibility? Or is probability unavailable in JSGF mode?

2018-01-03 18:40:34.133792+0200 OpenEarsTest[2672:954627] Pocketsphinx heard “HEY THERE” with a score of (0) and an utterance ID of 4.
2018-01-03 18:40:34.138785+0200 OpenEarsTest[2672:954570] The received hypothesis is HEY THERE with a score of 0 and an ID of 4

Thanks for the logging. This is a bit unusual in my experience so I’m trying to pin down whether there are any contributing factors, pardon my questions. How close is your implementation to the sample app which ships with the distribution? Do you get the same results when just altering the sample app to support this grammar? Is there anything about the environment (or I guess even the speaker) which could contribute to the results here?

Thank you for the reply.
Everything is standard as in the example, except that for this log I changed the mode to grammar and supplied several phrases I would like it to recognize (as at the top of the log). Xcode 9.2, iPhone 5s. I am not a native English speaker, but I speak it fairly well. The point of the experiment is to fire only when the proper phrase is said, in proper English.

