Multiple QnA Services in one Bot using Nodejs

I have a bot with a LUIS recognizer and one QnA Maker service. The way I currently consume QnA answers is that a LUIS intent triggers a dialog flow, which makes a simple HTTP request to the QnA service. I would like to expand this bot to multiple QnA knowledge bases (KBs), or multiple services, each with its own KB:

Product info QnA

Device support QnA

Visa issue QnA

When I thought about doing this, a few approaches came to mind.

Approach 1: Create a LUIS intent for each of the above QnA services, copy all the questions from each KB into the corresponding LUIS intent, and make an HTTP request to the matched QnA service.
Cons: every time the QnA questions change, I would have to update the LUIS intents; lots of manual copy work.
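A sketch of this approach in Node, assuming hypothetical KB IDs and a hypothetical endpoint key. The request shape follows the QnA Maker generateAnswer REST endpoint, and each LUIS intent routes to its own knowledge base:

```javascript
// Map each LUIS intent to a QnA Maker knowledge base (ids are placeholders).
const kbByIntent = {
  ProductInfo: 'kb-id-product',
  DeviceSupport: 'kb-id-device',
  VisaIssue: 'kb-id-visa'
};

// Build the HTTP request for the matched KB's generateAnswer endpoint.
function buildQnaRequest(intent, question, host, endpointKey) {
  const kbId = kbByIntent[intent];
  if (!kbId) return null; // no QnA KB for this intent
  return {
    url: `https://${host}/qnamaker/knowledgebases/${kbId}/generateAnswer`,
    method: 'POST',
    headers: {
      Authorization: `EndpointKey ${endpointKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ question })
  };
}
```

The returned object can then be passed to any HTTP client (e.g. `request` or `node-fetch`); the host and endpoint key come from the deployment details in the QnA Maker portal.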

I'm new to Electron but have been working as a front-end developer for many years. I'm having problems when trying to build a distributable for my simple app.

In short, I'm able to build the app and use 'npm start' to run it. The app works perfectly under these conditions. But as soon as I try to use 'electron-builder', the whole project fails and won't run anymore; even if I revert to the original working 'package.json', it errors. I'm not sure what I'm doing wrong.

Does anyone know of a simpler way to build the final app? I've tried electron-builder and electron-packager, but both fail for me. Are there any IDEs out there that can natively make distributions for Electron (similar to an Xcode build)? Or are there better alternatives to electron-builder and electron-packager?

I'm getting a bit frustrated because I have to recreate the whole project every time to get my app running again.
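For reference, electron-builder usually needs only minimal configuration; one common pitfall is that it expects `electron` in `devDependencies` rather than `dependencies`. A minimal `package.json` sketch (the appId, file list, and version numbers are placeholders for illustration):

```json
{
  "main": "main.js",
  "scripts": {
    "start": "electron .",
    "dist": "electron-builder"
  },
  "devDependencies": {
    "electron": "^4.0.0",
    "electron-builder": "^20.38.0"
  },
  "build": {
    "appId": "com.example.myapp",
    "files": ["main.js", "index.html", "renderer.js"]
  }
}
```

With this in place, `npm run dist` should produce installers under `dist/` without touching the files that `npm start` uses, so the source project keeps running.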

However, I also have a route to serve in my Node app, and the response I get is out of scope inside the route. I want to add the values from my request into the view using EJS, but EJS requires the values used by the template to be defined in .render(), and getting those values in there just seems impossible.
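For reference, the usual way to get request values into an EJS template is to build a locals object from the request and pass it as the second argument to res.render(). A sketch with hypothetical field names (the Express route itself is shown in comments):

```javascript
// Collect the values the template needs from a request-like object.
// Field names here are hypothetical.
function buildLocals(req) {
  return {
    username: (req.query && req.query.username) || 'guest',
    items: (req.body && req.body.items) || []
  };
}

// In the Express app (illustration only):
// app.get('/profile', (req, res) => {
//   // The template can then use <%= username %> and iterate over items
//   res.render('profile', buildLocals(req));
// });
```

Everything the template references must appear in that locals object (or in app.locals/res.locals), which is why values captured elsewhere appear "out of scope" unless they are passed through here.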

Context: We (a retailer) want to offer our customers the option to upload a picture of a product that they got from our social media accounts, to get all the details about that product (prices, stock, description, variants, etc.).

The cognitive service should be able to match the uploaded picture against our image storage and identify which product it shows.

My question: Do you know a way to use the cognitive services for this use case?

I'm evaluating "Bing Speech API" and its new brother "Speech Service" (still in preview mode) for a simple voice recognition mobile application based on Xamarin.

I've obtained good results using the REST API, but it has a 15-second duration limit that makes it hard to apply to continuous voice recognition and wake-word detection.

For this reason, I've also explored the Bing Speech and Speech Service SDKs based on WebSockets (also known as the client libraries).
They work well in desktop applications, but they seem not to be compatible with Xamarin (see picture below).

Do you know of any way to use the Bing Speech API or Speech Service client library (WebSocket) to handle continuous voice recognition in a Xamarin project?

The only alternative I've found so far is to move away from Xamarin and use the dedicated Bing Speech API/Speech Service client libraries for Android and iOS...

I have a text input device which works similarly to a swipe keyboard, in that some gestures produce exactly one word, whereas other gestures could be interpreted as a number of different words.

So the challenge is that the input device eventually builds a sentence (of the kind shown below) from the user's inputs, and I'm wondering what sort of approach, algorithm, or API I should use to select the correct word based on context and then output a sentence that actually makes sense.

"This is a test/rest sentence for which I need an algorithm to figure our/out a corrected version of/or."

(The word choices are separated by a forward slash.)

It would also be ideal if the algorithm could do this in real time, as the sentence is being entered. Any help would be appreciated! Thanks in advance!
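One common approach is a statistical language model: at each ambiguous position, score every candidate by how likely it is to follow the preceding word, and keep the best one. A minimal greedy sketch using bigram counts (the counts below are made up for illustration; a real system would train on a large corpus or call a language-model API, and would usually search over whole sequences, e.g. with a Viterbi-style decoder, rather than greedily):

```javascript
// Toy bigram counts; a real model would be learned from a corpus.
const bigramCounts = {
  'a test': 9, 'a rest': 2,
  'figure out': 12, 'figure our': 1,
  'version of': 8, 'version or': 1
};

function score(prev, word) {
  return bigramCounts[prev + ' ' + word] || 0;
}

// tokens: array of strings; ambiguous gestures are arrays of candidates.
function decode(tokens) {
  const out = [];
  for (const t of tokens) {
    if (Array.isArray(t)) {
      const prev = out.length ? out[out.length - 1] : '';
      // keep the candidate with the highest bigram score after `prev`
      let best = t[0];
      for (const cand of t) {
        if (score(prev, cand) > score(prev, best)) best = cand;
      }
      out.push(best);
    } else {
      out.push(t);
    }
  }
  return out.join(' ');
}
```

Because the decision at each position only depends on the previous word, this can run incrementally as the user enters each gesture, which suits the real-time requirement.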

I'm using Bot Framework SDK3 and the QnA Maker service.
I'm trying to use PromptDialog.Choice to show my QnA knowledge base's metadata (filters), which I think could make my FAQ bot smarter.

I show users the choices (metadata) they need to select from first. But if users just want to talk to the bot directly without selecting anything, I don't know how to make that work. It seems an option must be selected when using PromptDialog.Choice. Are there any better ways you can recommend?

I want to allow the user to type whatever they want, even when the bot shows a PromptDialog.Choice, but I don't know how to handle that in ResumeAfterSelectCategory:

private async Task SelectCategory(IDialogContext context)
{
    // Build the list of choices from the KB metadata keys
    var options = category.Keys.ToList();
    options.Add("Category1");
    options.Add("Category2");
    options.Add("Category3");

    PromptDialog.Choice(
        context: context,
        resume: this.ResumeAfterSelectCategory,
        options: options,
        prompt: "Which one do you prefer?",
        retry: "Choose one below.",
        attempts: 0,
        promptStyle: PromptStyle.Auto);
}

private async Task ResumeAfterSelectCategory(IDialogContext context, IAwaitable<string> result)
{
    // This gets the raw text of the user's message:
    // var typed = context.Activity.AsMessageActivity().Text;
    // But I can't tell whether the user typed it or clicked an option,
    // and if the user typed free text, the prompt fails with
    // "you tried too many times" (attempts is 0).

    // This gets the option the user clicked:
    var selected = await result;
}

Can you please help me understand the difference between intent matches and a dialog's matches? I tried the following way too, but it does not recognize the answering entity. How can QnA be called from the following code?

Please help me understand what's wrong in the above code; it would be a great help.
Also, what is the difference between intent matches and a dialog's matches? As I understand it, the bot takes session.message, matches it against the 'matches:' arguments, and calls the corresponding dialog, so it can jump back and forth between dialogs.

In the first case, it's strange to me, because it does not do it the second time.
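To illustrate the distinction (this is a toy model, not the botbuilder API): in SDK3, an IntentDialog's matches handlers route messages while that dialog is the one receiving them (typically at the root), whereas a dialog's triggerAction({ matches: ... }) registers a global action that can interrupt whatever dialog is currently on the stack and start the matched dialog:

```javascript
// Toy model of SDK3 routing: root-level intent matches vs. global
// trigger actions. Names and behavior are simplified for illustration.
class MiniBot {
  constructor() {
    this.rootMatches = {};    // like IntentDialog.matches at the root
    this.globalTriggers = {}; // like dialog.triggerAction({ matches })
    this.stack = [];          // the active dialog stack
  }
  matchRoot(intent, dialog) { this.rootMatches[intent] = dialog; }
  triggerAction(intent, dialog) { this.globalTriggers[intent] = dialog; }
  dispatch(intent) {
    // a global trigger fires even while another dialog is active
    if (this.globalTriggers[intent]) {
      this.stack = [this.globalTriggers[intent]];
      return this.globalTriggers[intent];
    }
    // a root match only fires when no dialog is waiting for input
    if (this.stack.length === 0 && this.rootMatches[intent]) {
      this.stack.push(this.rootMatches[intent]);
      return this.rootMatches[intent];
    }
    // otherwise the message goes to whatever dialog is active
    return this.stack[this.stack.length - 1] || null;
  }
}
```

This also models the "does not do it the second time" symptom: once a dialog is on the stack, a root-level intent match no longer fires, while a triggerAction would still interrupt.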