Loads the topic, then exports and compiles the corresponding context files so that they are ready for use by the speech recognition engine (ASR).
The name of the topic (defined in the first line of the specified file) is returned by the function, so that it can be used elsewhere in the code.

Adds the specified topic to the list of topics currently used by the dialog engine to parse the human's inputs.
Several topics can be active at the same time, but only one of them is used to generate proposals (this topic is said to have the focus).

Removes the specified topic from the list of topics currently used by the dialog engine to parse the human's inputs.
Several topics can be active at the same time, but only one of them is used to generate proposals (this topic is said to have the focus).

ALDialog.setLanguage("English")
topicName = ALDialog.loadTopic("/home/nao/aldialog_test_topic_file.top")
ALDialog.activateTopic(topicName)
ALDialog.subscribe("my_subscribe_test")
try:
    raw_input("speak to the robot now, press Enter when finished")
finally:
    ALDialog.unsubscribe("my_subscribe_test")
    ALDialog.deactivateTopic(topicName)
    ALDialog.unloadTopic(topicName)

While multiple topics can be active at the same time, only one of them is used to generate proposals.
This topic is said to have the focus. A call to this function forces the focus to the specified topic.
After this call, proposals will be generated from this topic and human inputs will be first parsed through this topic.
However, if a rule from a different active topic is matched, the focus will change automatically to that topic.

Parameters:

topicName – the topic’s name returned previously by ALDialogProxy::loadTopic (also included in the first line of the .top file).

ALDialog.setLanguage("English")
topic_name = ALDialog.loadTopic("/home/nao/aldialog_test_topic_file.top")
topic_name_2 = ALDialog.loadTopic("/home/nao/aldialog_test_topic_file_2.top")
ALDialog.activateTopic(topic_name)
ALDialog.activateTopic(topic_name_2)
ALDialog.subscribe("my_setFocus_test")
ALDialog.setFocus("mytopic")
ALDialog.forceOutput()
ALDialog.forceOutput()
ALDialog.forceOutput()  # nothing more to say from "mytopic"
ALDialog.setFocus("mytopic_2")
ALDialog.forceOutput()
ALDialog.forceOutput()
ALDialog.forceOutput()  # nothing more to say from "mytopic_2"
ALDialog.unsubscribe("my_setFocus_test")
ALDialog.deactivateTopic(topic_name)
ALDialog.deactivateTopic(topic_name_2)
ALDialog.unloadTopic(topic_name)
ALDialog.unloadTopic(topic_name_2)

Execution – robot’s responses

$ python aldialog_setFocus.py --ip $YOUR_ROBOTS_IP_ADDRESS
# the robot will say:
# 1) "Human, how are you?"
# 2) "What do you want to do?"
# 3) [no more proposals will be found in the first topic]
# 4) "Human, how was your day?"
# 5) "Do you like robots?"
# 6) [no more proposals will be found in the second topic]

Note

In the script, we assume that you put both .top files in the home folder on the robot (/home/nao). You can, of course, put them elsewhere and modify the script accordingly.

Returns the name of the currently focused topic. To focus a topic, you need to match a rule of that topic. You can also force the focus with ALDialogProxy::setFocus. The currently focused topic is also used to make proposals.
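The focus-follows-match behavior can be sketched with a small stand-alone model (an illustration only, not the dialog engine's actual implementation; the topic and rule names are made up):

```python
# Illustrative model (not the real engine): the focus follows the topic
# whose rule last matched a human input.

class FocusTracker(object):
    def __init__(self, rules):
        # rules: dict mapping topic name -> set of inputs that topic understands
        self.rules = rules
        self.focus = None

    def parse(self, human_input):
        # The focused topic is tried first, then the other active topics.
        order = ([self.focus] if self.focus else []) + \
                [t for t in self.rules if t != self.focus]
        for topic in order:
            if human_input in self.rules[topic]:
                self.focus = topic  # matching a rule moves the focus
                return topic
        return None

tracker = FocusTracker({
    "greetings": {"hello"},
    "weather":   {"what's the weather"},
})
tracker.parse("hello")
print(tracker.focus)                 # greetings
tracker.parse("what's the weather")  # a rule of another topic matches...
print(tracker.focus)                 # weather: the focus moved automatically
```

On a real robot, ALDialogProxy::getFocus would return the current value of this focus, and ALDialogProxy::setFocus would force it.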

Warning

ALDialogProxy::setConcept can be called only for a dynamic concept, declared in qichat with the keyword dynamic. Static concepts (declared in qichat with concept) are constant and cannot be modified with the ALDialog API.

Adds more words or sentences to an existing, non-empty dynamic concept, without overwriting it.

Warning

ALDialogProxy::addToConcept can be called only for a dynamic concept, declared in qichat with the keyword dynamic. Static concepts (declared in qichat with concept) are constant and cannot be modified with the ALDialog API.
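The difference between setConcept and addToConcept can be illustrated with a minimal sketch (a hypothetical in-memory stand-in for the engine's dynamic-concept store, not the real implementation): setConcept replaces the whole word list, while addToConcept appends to it.

```python
# Illustration only: a hypothetical stand-in for the dialog engine's
# dynamic-concept store, showing the overwrite-vs-append semantics of
# setConcept and addToConcept.

class ConceptStore(object):
    def __init__(self):
        # (concept name, language) -> list of words/sentences
        self._concepts = {}

    def set_concept(self, name, language, content):
        # setConcept overwrites the whole dynamic concept
        self._concepts[(name, language)] = list(content)

    def add_to_concept(self, name, language, content):
        # addToConcept appends to an existing, non-empty dynamic concept
        existing = self._concepts.get((name, language))
        if not existing:
            raise ValueError("addToConcept needs an existing, non-empty concept")
        existing.extend(content)

    def get_concept(self, name, language):
        return self._concepts[(name, language)]

store = ConceptStore()
store.set_concept("fruits", "English", ["apple", "banana"])
store.add_to_concept("fruits", "English", ["cherry"])
print(store.get_concept("fruits", "English"))  # ['apple', 'banana', 'cherry']
store.set_concept("fruits", "English", ["kiwi"])
print(store.get_concept("fruits", "English"))  # ['kiwi'] -- overwritten
```

On the robot, the corresponding calls would be made on the ALDialog proxy (ALDialog.setConcept / ALDialog.addToConcept), and only for concepts declared dynamic in qichat.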

For the given grammar, sets the minimum confidence level of a speech recognition result.
Below this threshold value, the dialog engine ignores results returned by the ASR (as if the robot did not hear anything).

Parameters:

grammar – name of the grammar for which we set the threshold: BNF, SLM or REMOTE.

For the given grammar and language, sets the minimum confidence level of a speech recognition result.
Below this threshold value, the dialog engine ignores the result (as if the robot did not hear anything).

Parameters:

grammar – name of the grammar for which we set the threshold: BNF, SLM or REMOTE.

For the given grammar and language, gets the minimum confidence level of a speech recognition result.
Below this threshold value, the dialog engine ignores the result (as if the robot did not hear anything).

Parameters:

grammar – name of the grammar whose threshold we check: BNF, SLM or REMOTE.
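The thresholding behavior described above can be sketched as a small stand-alone filter (an illustration of the behavior only, not the engine's code; the threshold values are made up):

```python
# Illustration of the confidence-threshold behavior: ASR results whose
# confidence falls below the per-grammar threshold are ignored, as if
# the robot had not heard anything.

thresholds = {"BNF": 0.5, "SLM": 0.4, "REMOTE": 0.3}  # made-up example values

def accept_asr_result(grammar, text, confidence):
    """Return the recognized text, or None if the result is ignored."""
    if confidence < thresholds[grammar]:
        return None  # below threshold: treated as silence
    return text

print(accept_asr_result("SLM", "hello robot", 0.62))  # hello robot
print(accept_asr_result("SLM", "hello robot", 0.25))  # None
```

On the robot, the dialog engine applies this check internally to every result returned by the ASR for the given grammar (and, where applicable, language).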

Sets the animated speech (speaking movement) configuration used by the dialog.
It can be different from the general one, defined with the ALSpeakingMovement API.

Parameters:

animatedSpeechConfiguration – the animated speech configuration to be used by the dialog.
It follows the same format as the one used by ALAnimatedSpeechProxy::say with a local configuration.
Currently the only valid parameter is bodyLanguageMode.

Note

In order to reset the configuration to default (so that the dialog uses the general ALSpeakingMovement configuration), pass an empty vector (list in Python) to the method.

Retrieves all topics marked as collaborative dialogs in the application manifest, then loads, compiles and activates them. Application triggers are automatically
added to speech recognition if the dialog_applauncher application is installed on the robot.
dialog_applauncher is a basic channel application.

This method is related to the qichat event detection feature (see also other related methods: ALDialogProxy::addBlockingEvent and ALDialogProxy::removeBlockingEvent).
If an event is not blocking (that is, it does not interrupt the robot’s speech immediately), it has a limited validity, by default 2 seconds.
If the robot is currently saying a very long sentence, the event (e.g. touching the robot’s head) will time out and the robot will not react.

ALDialog.setLanguage("English")
topic_name = ALDialog.loadTopic("/home/nao/aldialog_test_topic_file.top")
ALDialog.activateTopic(topic_name)
ALDialog.subscribe("my_setDelay_test")
raw_input("\nThe robot is going to start counting from 0 to 5. "
          "Try touching its head's tactile sensors while it's speaking. "
          "The timeout is set to default (2 seconds). Press Enter to start:")
ALDialog.forceInput("start counting")
raw_input("\nThe robot is going to start counting from 0 to 5. "
          "Try touching its head's tactile sensors while it's speaking. "
          "The timeout is set to infinite now. Even if you touch the head when the "
          "robot only starts counting, it will still react after having stopped "
          "speaking. Press Enter to start:")
ALDialog.setDelay("MiddleTactilTouched", -1)
ALDialog.setDelay("FrontTactilTouched", -1)
ALDialog.setDelay("RearTactilTouched", -1)
ALDialog.forceInput("start counting")
raw_input("\nThe robot is going to start counting from 0 to 5. "
          "Try touching its head's tactile sensors while it's speaking. "
          "The timeout is set back to default (2 seconds). Press Enter to start:")
ALDialog.setDelay("MiddleTactilTouched", 2000)
ALDialog.setDelay("FrontTactilTouched", 2000)
ALDialog.setDelay("RearTactilTouched", 2000)
ALDialog.forceInput("start counting")
ALDialog.unsubscribe("my_setDelay_test")
ALDialog.deactivateTopic(topic_name)
ALDialog.unloadTopic(topic_name)

Qichat has an event detection feature – for example touching the robot’s head can provoke a verbal reaction. With this method you can declare an event as blocking, so that it will interrupt the robot’s speech and the robot will react immediately. By default, the dialog engine waits until the speech is finished and then makes the robot react to the event. In order to restore the default behavior, use the ALDialogProxy::removeBlockingEvent method (or restart naoqi).

ALDialog.setLanguage("English")
topic_name = ALDialog.loadTopic("/home/nao/aldialog_test_topic_file.top")
ALDialog.activateTopic(topic_name)
ALDialog.subscribe("my_deactivateTag_test")
raw_input("\nThe robot is going to start counting from 0 to 5 now. "
          "Try touching its head's tactile sensors while it's speaking. "
          "It will not interrupt its speech. Press Enter to start:")
ALDialog.forceInput("start counting")
raw_input("\nThe robot is going to start counting from 0 to 5 now. "
          "Try touching its head's tactile sensors while it's speaking. "
          "Touching the head is now declared as a blocking event, the "
          "robot will react to your touch immediately. Press Enter to start:")
ALDialog.addBlockingEvent("MiddleTactilTouched")
ALDialog.addBlockingEvent("FrontTactilTouched")
ALDialog.addBlockingEvent("RearTactilTouched")
ALDialog.forceInput("start counting")
raw_input("\nThe robot is going to start counting from 0 to 5 now. "
          "Try touching its head's tactile sensors while it's speaking. "
          "The default behavior is restored now, the robot won't interrupt its speech.")
ALDialog.removeBlockingEvent("MiddleTactilTouched")
ALDialog.removeBlockingEvent("FrontTactilTouched")
ALDialog.removeBlockingEvent("RearTactilTouched")
ALDialog.forceInput("start counting")
ALDialog.unsubscribe("my_deactivateTag_test")
ALDialog.deactivateTopic(topic_name)
ALDialog.unloadTopic(topic_name)

Makes the dialog engine start making proposals automatically.
After an answer, the dialog engine will automatically say a proposal from the available topics (the proposal will be included in the answer).
The dialog engine will first try to say a proposal from the topic having the focus (see: ALDialogProxy::setFocus), then from the other topics.
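The proposal order can be sketched as follows (an illustrative model, not the engine's actual implementation; the topic names and proposal texts are taken from the execution example above):

```python
# Illustrative model of the proposal order: the focused topic is asked
# for a proposal first; the other active topics are tried only when the
# focused topic has nothing left to say.

def next_proposal(active_topics, focus):
    """active_topics: dict mapping topic name -> list of pending proposals."""
    order = [focus] + [t for t in active_topics if t != focus]
    for topic in order:
        if active_topics[topic]:
            return topic, active_topics[topic].pop(0)
    return None, None  # nothing more to propose

topics = {
    "mytopic":   ["Human, how are you?", "What do you want to do?"],
    "mytopic_2": ["Human, how was your day?"],
}
print(next_proposal(topics, "mytopic"))  # ('mytopic', 'Human, how are you?')
print(next_proposal(topics, "mytopic"))  # ('mytopic', 'What do you want to do?')
print(next_proposal(topics, "mytopic"))  # ('mytopic_2', 'Human, how was your day?')
```

The third call falls back to "mytopic_2" because the focused topic has run out of proposals, mirroring the behavior shown in the setFocus execution example.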