Introduction

This is part of a larger project on speech recognition that we developed at ORT Braude College. The aim of the project is to activate programs on your desktop or panel by voice.

Motivation

We wanted to make some common tasks that every user performs on a computer (opening/closing programs, editing text, calculating) possible not only with the mouse and keyboard, but also by voice.

Background

Every speech recognition application consists of:

An engine that translates sound waves into text

A list of speech commands

Needless to say, as the grammar grows, the probability of misinterpretation grows with it. We tried to keep the grammar as small as possible without losing information. The grammar format is explained later.
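To illustrate the SAPI 5 XML grammar format, here is a minimal command list (a hypothetical fragment for illustration, not the project's actual grammar file):

```xml
<!-- Minimal SAPI 5 command-and-control grammar (illustrative) -->
<GRAMMAR LANGID="409">
  <RULE NAME="Commands" TOPLEVEL="ACTIVE">
    <LIST>
      <P>activate</P>
      <P>deactivate</P>
      <P>commands list</P>
    </LIST>
  </RULE>
</GRAMMAR>
```

Each P element is a phrase the engine will try to match; keeping this list short is exactly what keeps misinterpretations down.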

Requirements

SAPI 5 (ships with Windows XP)

The Microsoft recognition engine for English (if not found, it can be downloaded from Microsoft's site)

The easiest way to check whether you have these is to open Control Panel -> Speech. You should see both the "Text To Speech" tab and the "Speech Recognition" tab. If you don't see the "Speech Recognition" tab, you should download the engine from the Microsoft site.

Some Technical Stuff

The program is designed to run from its source directory, so while compiling and running it from the .NET IDE, you will need to copy the agent file merlin.acs to the \bin\debug\ directory. I've included it in the demo in case you don't have it on your computer. You can still download any agent you want from Microsoft's site.

For the same reason, if you change an XML file, you will need to copy it to the \bin\debug\ directory as well.

How to Start

The project's interface is shown below (Fig 1).

In order to start talking right away, you should follow these two steps...

The first (and important) thing to do is to adjust the microphone by clicking the right mouse button and choosing the "Mic training wizard."

The second (also important) thing to do is to train the engine on your voice by choosing the "User training wizard."

IMPORTANT: after these changes, you will need to make the program start listening again by clicking the right mouse button and choosing "Start listen." The more you train the engine, the better it will recognize your voice, although you will see an improvement after the first training session. Once the program is started, it may be in one of several "states". In each state, it recognizes a list of specific commands. The list of commands that the program can identify is shown below.

A little explanation of the menu...

"Start listen"/"Stop listen"

Enables/disables the mic (the menu text switches according to what you choose). After disabling, the labels ("Accuracy" and "State") turn red, indicating the current state.

"Use agent"

The agent is used only for giving feedback, but it can be useful to know whether your command was heard. You can still disable it if you want, if you don't have an agent file (ACS files can be downloaded from Microsoft), or if it is not working and you still want to use recognition; there is no connection between the agent and the recognition. The program also handles the case where the agent file is not found or cannot be loaded for any other reason.

"Add favorites"

In the "activate" state, you can say the command "favorites programs" to open a form with your favorite programs and run one of them by saying its name. This menu opens a form showing your favorite programs so you can add, delete, or edit them as you want.

"Change character"

This allows you to change the agent character (ACS files can be downloaded from the Microsoft site).

"Change accuracy limit"

The accuracy of every recognition is displayed in the "Accuracy" label. With this menu you can change the accuracy limit: the program responds only to commands it hears with at least this accuracy. You should set this to avoid responses to stray voices or sounds. You can raise the limit each time you train the engine and recognition improves.

"Change user profile"

If the program is used by several users, you can give each user a profile and train the engine for each one (to add a user profile, open Control Panel -> Speech; here you can only choose among existing profiles).

"Mic training wizard..."

As explained before, this is very important for recognition. The first thing to do on every computer (only the first time, or after you switch to a new mic) is to activate this menu and set up your mic.

"User training wizard..."

Use this for better recognition (note that the training applies to the selected user profile).

How it Works

The program begins in the "deactivate" state, which means it is in a sleepy state... The command "activate" wakes the program up (the "activate" state) and it starts recognizing other commands (Fig 2).

For example, use "start" to activate the Start menu. Then you can say "programs" to enter the Programs menu. From this point, you can navigate by saying "down", "up", "right"... "OK" according to the commands list. You can also say "commands list" at any point to see a form with the list of commands that you can say.

One of the important states in the program is the "menu" state: if a program is running (and focused), you can say "menu" to hook all of its menu items and start using them. For example, if you are running Notepad, you can open a new file by saying "menu"->"File"->"New". Every time you hook a menu, you can see how many menu items the program hooked, and you can start using them as commands. I had a little problem with some menus, such as Word's and Excel's, that I couldn't hook, but... I'll check it later.

Another nice state is the "numeric" state. For example, say the commands "favorites programs", "calculator", "enter numeric state", "one", "plus", "two", "equal" and see the result. Alternatively, you can open a site in the "alphabetic" state. For example, say the commands "favorites programs", "internet explorer", "enter alphabetic state", "menu", "down", "down", "O K", "enter alphabetic state", "c", "o", "d", "e", ..., "dot", "c", "o", "m" and see the result.

Getting Help

One of the main problems with voice-activated systems is what happens when you don't know exactly which commands the computer expects. No problem! If you are unable to proceed, just say "commands list" and the program will show you the commands available from that point. States (and their commands) available in the program:

deactivate

close speech recognition

about speech recognition

close | hide

activate

deactivate

up

down

right

left

enter | run | ok

escape | cancel

tab

menu | alt

All "activate" state commands + the hooked menu items

start

deactivate

up

down

right

left

enter | run | ok

escape

tab

commands list

programs

documents

settings

search

help

run

commands list

close | hide

page up

page down

close

favorites | favorites programs

close | hide

A program name from the list

switch program

tab | right

shift tab | left

enter | ok

escape | cancel

press key

release | stop

up

down

right

left

shut down

right | tab

left | shift tab

escape | cancel

enter | ok

page up

page down

yes

no

enter numeric state

exit numeric state

back | back space

plus

minus

mul | multiply

div | divide

equal

Numbers from 0 - 9

enter alphabetic state

exit alphabetic state

back space

enter

at ("@")

underline ("_")

dash ("-")

dot (".")

back slash ("/")

Letters from A to Z

Code Explanation

The first thing to do is to add a reference to the file C:\Program Files\Common Files\Microsoft Shared\Speech\SAPI.dll so we can use the Speech Library by writing...

using SpeechLib;

When we activate the engine, the initialization step takes place. There are mainly three objects involved:

An SpSharedRecoContext object that starts the recognition process (it must be shared so it applies to all processes). It implements the ISpeechRecoContext interface. After this object is created, we subscribe to the events we are interested in (in our case, AudioLevel and Recognition).

A static grammar object, implementing ISpeechRecoGrammar, that can be loaded from an XML file or built programmatically. The list of static recognizable words is shown in Fig 2 and attached for download.

A dynamic grammar that lets us add rules at runtime; each rule implements ISpeechGrammarRule and has two main parts.
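A rough sketch of that initialization step (this requires the SAPI 5 COM reference on Windows; the handler names and the grammar file name are illustrative, not the project's actual ones):

```csharp
using SpeechLib; // COM interop generated from SAPI.dll

public class Recognizer
{
    private SpSharedRecoContext recoContext;
    private ISpeechRecoGrammar grammar;

    public void Init()
    {
        // Shared context, so recognition applies to all processes
        recoContext = new SpSharedRecoContext();

        // Subscribe to the events we are interested in
        recoContext.AudioLevel += OnAudioLevel;
        recoContext.Recognition += OnRecognition;

        // Load the static grammar from an XML file and activate its rules
        grammar = recoContext.CreateGrammar(0);
        grammar.CmdLoadFromFile(@"commands.xml", SpeechLoadOption.SLODynamic);
        grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSActive);
    }

    private void OnAudioLevel(int streamNumber, object streamPosition, int audioLevel)
    {
        // Update the audio-level indicator in the UI
    }

    private void OnRecognition(int streamNumber, object streamPosition,
        SpeechRecognitionType type, ISpeechRecoResult result)
    {
        // result.PhraseInfo.GetText(0, -1, true) returns the recognized phrase
    }
}
```

Note that subscribing to AudioLevel only has an effect if audio-level events are enabled on the context's EventInterests; the sketch leaves that detail out.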

Hooking Menus

When a program is active, saying "menu" hooks its menu and adds its commands to the dynamic grammar. We used some unmanaged functions imported from user32.dll. The program also hooks the accelerators associated with each menu item (the character preceded by an & sign). The command is simulated with the keybd_event function and executed.
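The keybd_event import looks roughly like this (a sketch: the helper class and method are ours for illustration, while the signature and constants are the standard user32.dll ones):

```csharp
using System;
using System.Runtime.InteropServices;

static class MenuKeys
{
    // Standard user32.dll key-simulation function
    [DllImport("user32.dll")]
    private static extern void keybd_event(byte bVk, byte bScan,
                                           uint dwFlags, UIntPtr dwExtraInfo);

    private const uint KEYEVENTF_KEYUP = 0x0002;
    private const byte VK_MENU = 0x12; // the Alt key

    // Simulate Alt + an accelerator key, e.g. Alt+F for a "&File" menu
    public static void PressAccelerator(byte vk)
    {
        keybd_event(VK_MENU, 0, 0, UIntPtr.Zero);               // Alt down
        keybd_event(vk, 0, 0, UIntPtr.Zero);                    // key down
        keybd_event(vk, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);      // key up
        keybd_event(VK_MENU, 0, KEYEVENTF_KEYUP, UIntPtr.Zero); // Alt up
    }
}
```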

Points of Interest

We used the MSAgent, but in our case it has a passive role (it gives feedback that the command was heard).

There is an accuracy option: the user can set a threshold to filter out unsure recognitions.
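A sketch of how such a filter can sit in the Recognition event handler (accuracyLimit is our hypothetical threshold field; the property names follow the SpeechLib interop):

```csharp
// Ignore recognitions whose engine confidence falls below the user's limit
private void OnRecognition(int streamNumber, object streamPosition,
    SpeechRecognitionType type, ISpeechRecoResult result)
{
    float confidence = result.PhraseInfo.Elements.Item(0).EngineConfidence;
    if (confidence < accuracyLimit)
        return; // unsure recognition: filtered out

    // ...otherwise dispatch the recognized command
}
```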

In the future, we plan to make more applications "voice friendly."

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.