In interactive learning mode, you provide feedback to your bot while you talk
to it. This is a powerful way
to explore what your bot can do, and the easiest way to fix any mistakes
it makes. One advantage of machine learning-based dialogue is that when
your bot doesn’t know how to do something yet, you can just teach it!
Some people call this Software 2.0.

The chat history and slot values are printed to the screen, which
should be all the information you need to decide what the correct
next action is.

In this case, the bot chose the
right action (utter_greet), so we type y.
Then we type y again, because action_listen is the correct
action after greeting. We continue this loop, chatting with the bot,
until the bot chooses the wrong action.
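Each confirmed conversation is saved as a Rasa Core story in Markdown format. For this short exchange, the dumped story might look roughly like the following sketch (the story name and intent name are illustrative):

```
## interactive_story_1
* greet
    - utter_greet
```

Note that action_listen does not appear explicitly; Rasa Core inserts it implicitly at the end of each bot turn.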

For this example we are going to use the concertbot example,
so make sure you have its domain and training data. You can download
them from the examples/concertbot directory of the Rasa Core repository on GitHub.
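To get set up, you can clone the Rasa Core repository and work from the example’s directory. The commands below are a sketch; the exact way to start interactive learning (and the file names passed to it) depends on your Rasa Core version, so check the example’s README:

```shell
git clone https://github.com/RasaHQ/rasa_core.git
cd rasa_core/examples/concertbot

# Start an interactive learning session. Flags differ between versions;
# older releases used an --online flag instead of the interactive subcommand.
python -m rasa_core.train interactive -d domain.yml -s data/stories.md -o models/dialogue
```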

If you ask /search_concerts, the bot should suggest
action_search_concerts and then action_listen (the confidence with which
the policy selected each action is displayed next to the action name).
Now let’s enter /compare_reviews as the next user message.
The bot might choose the wrong one out of the two
possibilities (depending on the training run, it might also be correct):

In this case, the bot should run action_show_concert_reviews (rather than venue
reviews!), so we select that action.
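After you select the correct action, the corrected story that gets dumped might look roughly like this (story and intent names are illustrative):

```
## interactive_story_2
* search_concerts
    - action_search_concerts
* compare_reviews
    - action_show_concert_reviews
```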

Now we can keep talking to the bot for as long as we like to create a longer
conversation. At any point you can press Ctrl-C and the bot will
provide you with exit options. You can write your newly-created stories and NLU
data to files. You can also go back a step if you made a mistake when providing
feedback.

Make sure to combine the dumped stories and NLU examples with your original
training data for the next training.
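For example, assuming interactive learning dumped the new stories to a file called interactive_stories.md (a hypothetical name) and your original stories live in data/stories.md, you could simply concatenate the two files before retraining:

```shell
# Hypothetical file layout: create stand-ins for the two story files.
mkdir -p data
printf '## original story\n* greet\n    - utter_greet\n' > data/stories.md
printf '## interactive story\n* search_concerts\n    - action_search_concerts\n' > interactive_stories.md

# Append the newly generated stories to the original training data,
# then retrain on the combined file.
cat interactive_stories.md >> data/stories.md
grep -c '^##' data/stories.md  # prints 2
```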

The form logic is described by your FormAction class, and not by the stories.
The machine learning policies should not have to learn this behavior, and should
not get confused if you later change your form action, for example by adding or
removing a required slot.
When you use interactive learning to generate stories containing a form,
the conversation steps handled by the form
get a form: prefix. This tells Rasa Core to ignore these steps when training
your other policies. There is nothing special you have to do here; all of the form’s
happy paths are still covered by the basic story given in Basics.
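For example, a story generated through a form might look like the following sketch (the form, intent, and slot names are illustrative, not part of the concertbot example):

```
## happy path through a form
* request_restaurant
    - restaurant_form
    - form{"name": "restaurant_form"}
* form: affirm
    - form: restaurant_form
    - form{"name": null}
```

The steps carrying the form: prefix are ignored when training policies other than the FormPolicy.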

Every time the user responds with something other than the requested slot or
any of the required slots,
you will be asked whether you want the form action to try to extract a slot
from the user’s message when returning to the form. This is best explained with
an example:

Here the user asked to stop the form, and the bot asks the user whether they’re sure
they don’t want to continue. The user says they want to continue (the /affirm intent).
Here outdoor_seating has a from_intent slot mapping (mapping
the /affirm intent to True), so this user input could be used to fill
that slot. However, in this case the user is just responding to the
“do you want to continue?” question, so you select n: the user input
should not be validated. The bot will then ask for the
outdoor_seating slot again.
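Because you chose not to validate the input, the generated story records the affirm step without the form: prefix, something like the following sketch (intent and utterance names are illustrative):

```
* request_restaurant
    - restaurant_form
    - form{"name": "restaurant_form"}
* stop
    - utter_ask_continue
* affirm
    - restaurant_form
    - form{"name": null}
```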

Warning

If there is a conflicting story in your training data, i.e. you just chose
to validate the input (meaning it will be printed with the form: prefix),
but your stories file contains the same story where you don’t validate
the input (meaning it’s without the form: prefix), you will need to
remove this conflicting story. When this happens, a warning
prompt reminds you to do so:

WARNING: FormPolicy predicted no form validation based on previous training
stories. Make sure to remove contradictory stories from training data

Once you’ve removed that story, you can press enter and continue with
interactive learning.

We have a very active support community on the Rasa Community Forum
that is happy to help you with your questions. If you have any feedback for us or a specific
suggestion for improving the docs, feel free to share it by creating an issue in the Rasa Core
GitHub repository.