21 September 2018

Build an intelligent Bot with Microsoft Bot Framework

In general, a bot is a set of algorithms which, like a traditional application, allows the user to navigate predefined menus and perform a limited set of tasks.

What differentiates a bot from a traditional application is the user interaction: instead of clicking through menus, the user interacts with a bot using natural language (text or voice). A bot uses a set of advanced tools, such as artificial intelligence and machine learning, to better understand and perform the actions the user wants.

1. FRAMEWORK ARCHITECTURE

The architecture of an Azure bot can be represented as follows:

Figure 1 – Bot Architecture

• Bot Service (Bot Registration Channel)
The Microsoft Bot Framework supports a vast set of channels (Skype, Facebook Messenger, Microsoft Teams, etc.). This service allows the developer to define which channels may use the bot and the type of communication in those channels.

All messages are received by this service, which forwards them to the Web API and returns the reply message to the user.

• Web API Service
This API works as an endpoint and is responsible for replying to users; all the logic behind the bot should be developed here. The framework already provides the concept of dialogs and some types of interfaces. This service can also be integrated with other external systems.

• Bot State Service
To increase the performance and efficiency of the conversation between the bot and the user, a state is stored in the Bot State Service every time a new activity is recorded.

This service is asynchronous, and the bot states can be stored in memory or in a cache. Since the different states are kept in a database, the state can be restored at any point in time, allowing the bot to continue any previously initialized conversation.
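As a sketch of how a dialog might read and write user state, assuming the Bot Builder SDK v3 for .NET (the `Microsoft.Bot.Builder` and `Microsoft.Bot.Connector` NuGet packages); the "FavoriteCity" property is a hypothetical example:

```csharp
// Sketch only: assumes the Bot Builder SDK v3 for .NET; "FavoriteCity" is a
// hypothetical property used for illustration.
using System.Threading.Tasks;
using Microsoft.Bot.Connector;

public static class UserStateExample
{
    public static async Task RememberCityAsync(Activity activity, string city)
    {
        // The state client resolves to the Bot State Service endpoint.
        StateClient stateClient = activity.GetStateClient();

        // Read the current state for this user on this channel.
        BotData userData = await stateClient.BotState.GetUserDataAsync(
            activity.ChannelId, activity.From.Id);

        // Update a property and persist it back to the Bot State Service.
        userData.SetProperty<string>("FavoriteCity", city);
        await stateClient.BotState.SetUserDataAsync(
            activity.ChannelId, activity.From.Id, userData);
    }
}
```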

1.1. ARTIFICIAL INTELLIGENCE AND MICROSOFT COGNITIVE SERVICES

Currently, Microsoft already provides solutions (APIs and SDKs) to extend a bot's capabilities with AI through Microsoft Cognitive Services.

Some of those solutions fall into the following categories:

• Vision:
> Computer Vision: Processes and analyses images uploaded by the user to produce answers;
> Content Moderator: Monitors potentially offensive content (supports images, videos and text);
> Face API: Used for face recognition (the older Emotion API is also embedded in this solution).

• Speech:
> Speech API: Transforms human speech into text, or vice versa, allowing for voice commands;
> Translator Speech: Translates speech into other languages.

• Language:
> Bing Spell Check: Corrects both grammar and spelling errors;
> LUIS: Allows the application to better understand the user's words and reply to them efficiently.

• Knowledge:
> Custom Decision Service: Allows an application to gain intelligence through experience. This system incorporates user feedback to make decisions in real time.
• Search:
> Bing Image Search: Returns images that Bing determines are relevant to the user.

2. NATURAL LANGUAGE

Natural language processing is a branch of artificial intelligence that helps bots and applications understand human language, both audio and text. Not so long ago, it was humans who tried to understand computational languages to communicate with machines, using Java, C# and others. With the evolution of technology, the process has reversed and we now observe a new phenomenon: machines learning to understand human language.

Nowadays, it is easy for a computer to perform complex calculations, but if you ask it to find the differences between two similar images or to identify human emotions in an image, an immediate reply is hard to obtain. For a human, the opposite is true: we quickly recognize emotions but will hardly know the answer to a complex calculation.

AI helps machines understand emotions and words as the human brain does. Every person talks and writes in their own way, and the same sentence can be written in multiple forms while keeping the same meaning. We all express ourselves differently even when we have the same intent.

For example, if two individuals search for a hotel for their vacation, they will use different vocabulary and punctuation, and this is the big challenge nowadays. Using natural language processing, we can minimize this challenge.

In this chapter, we will talk about a Microsoft Cognitive Services offering called LUIS (Language Understanding Intelligent Service). Microsoft offers a free subscription with up to 10,000 monthly transactions. This service helps with the hard quest of understanding human language.

2.1. LUIS

LUIS aims to understand user phrases and, for that, it integrates with the Bot Framework. It finds the intent of a phrase by identifying its subject and references, increasing the bot's intelligence.

This application already provides a vast set of tools that enable the creation of simple phrase-comprehension models. However, LUIS needs to receive business-related input to respond with credibility.

Once published, the LUIS application is ready to receive and process expressions through HTTP requests and return the most likely user action (intent). Messages are exchanged as JSON objects.
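To illustrate, a reply from the LUIS endpoint is a JSON object along these lines (a simplified sketch: the field values are illustrative and the exact shape may vary with the API version):

```json
{
  "query": "book a flight from lisbon to madrid",
  "topScoringIntent": { "intent": "SearchFlights", "score": 0.92 },
  "entities": [
    { "entity": "lisbon", "type": "Origin", "score": 0.88 },
    { "entity": "madrid", "type": "Destination", "score": 0.90 }
  ]
}
```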

LUIS is composed of:

• Intents
The focus of this application is to understand the intention behind the user's sentences. Users can write any text with varied vocabulary and punctuation, so the developer must provide all the necessary intents (meaning all the motives, ideas or objectives of the users that will interact with the application).

The reasons fall into two main categories:
> Search: For example, the user wants to search for a file.
> Ask or request an action: Book a flight, book a restaurant, etc.

• Entities
Entities are as important as understanding the user's intention. For the application to reply accurately, it has to understand the received parameters. For example, to search for a specific file in the database, the application needs to know not only the user's intention to find a file, but also which file is needed. Otherwise, it will not be able to reply accurately to the user's input.

Example: Search for a file abc_123 dated 2nd of January.
> The intent is searching for a file;
> The entities are abc_123 and 2nd of January.

• Utterances
After defining the intents, LUIS should contain a group of example expressions so it can train itself to identify the intent and entities of a phrase.

It is important that the application is able to learn so it can increase the accuracy of its responses over time.

• Features
Features are metadata or attributes of a specific intent or entity. They help to increase the overall efficiency of the NLP algorithm.

3. CREATING AN INTELLIGENT BOT

The demonstrations in this article are based on .NET and Visual Studio IDE technologies.

To follow this article, it is recommended to use Visual Studio 2017 Community to develop the application in C#. Currently, the Bot Framework is in the preview phase, so it is necessary to manually add the "Bot Application" template.

The template creates a basic bot, called "EchoBot", which is a nice starting point to explore and develop new functionalities. The interaction with "EchoBot" is basic: it simply counts the number of characters in the user's sentence and prints it to the output.

3.1. INSTALL VISUAL STUDIO 2017 COMMUNITY, BOT TEMPLATE AND SDK


1. Download Visual Studio at https://www.visualstudio.com/downloads/ and install it.

2. Download the bot template at https://marketplace.visualstudio.com/items?itemName=BotBuilder.BotBuilderV3.

3. Without extracting it, move the .zip file to the C# template folder (…\Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C#).

4. Open Visual Studio in administrator mode and create a new project using the "Bot Application" template.

5. In order to add more sophisticated functionalities, the Bot Builder SDK is required. Execute the command "Install-Package Microsoft.Bot.Builder" in the Visual Studio Package Manager Console.

6. After creating the project, build it (Ctrl + Shift + B) and execute it (Ctrl + F5). A new window will open in the browser, like the image below.

Figure 2 – Testing Bot Emulator connection
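For reference, the message handler generated by the v3 "Bot Application" template looks roughly like the sketch below (names may vary slightly between template versions):

```csharp
// Sketch of the dialog generated by the v3 "Bot Application" template;
// assumes the Microsoft.Bot.Builder and Microsoft.Bot.Connector packages.
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public class RootDialog : IDialog<object>
{
    public Task StartAsync(IDialogContext context)
    {
        // Wait for the first message from the user.
        context.Wait(MessageReceivedAsync);
        return Task.CompletedTask;
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
    {
        var activity = await result as Activity;

        // EchoBot behaviour: count the characters and echo them back.
        int length = (activity.Text ?? string.Empty).Length;
        await context.PostAsync($"You sent {activity.Text} which was {length} characters");

        // Wait for the next message.
        context.Wait(MessageReceivedAsync);
    }
}
```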


3.2. BOT FRAMEWORK EMULATOR

The Bot Framework Emulator is a tool that connects to the bot application and simulates a conversational channel. An alternative to this tool would be to create a custom application, which requires more effort.

When the bot is registered in the Azure portal, it is also possible to interact using the "Test in Web Chat" option.

To install the emulator, follow these steps:

1. Download the .exe at https://github.com/Microsoft/BotFramework-Emulator/releases;

2. Open the emulator and select the option “New bot configuration”;

3. Name the bot and indicate the URL: the same URL assigned to the API, with the suffix "/api/messages". Example: http://localhost:3979/api/messages.

Figure 3 – Testing EchoBot with Bot Emulator

3.3. LUIS

The EchoBot we just created is not very useful: it only prints the number of characters in the phrase and cannot interpret user intents. To add some intelligence to the bot, we use LUIS. LUIS is exposed as a REST API, so it can interact with other applications through JSON messages.

Go to https://www.luis.ai/ and create a new LUIS application using your Microsoft credentials.

Figure 4 – Creating LUIS Application

Now that the application is created, the first step is to define the Intents. We can create Intents from predefined examples through the option "Add prebuilt domain intent". The "None" Intent is always created by default and will be used when the application cannot predict the user's intention.

Create two Intents, one called "SearchFlights" and the other called "SearchHotels".

Figure 5 – List of Intents

Now we need to create the Entities that can be mapped in the Intents. Create two Entities, "Origin" and "Destination". In the context of a flight search, both will be used to map the places of origin and destination. In the context of a hotel search, we only need to know the destination.

Figure 6 – List of Entities

After creating the domain of Intents and Entities, the next step is to provide the Utterances (sample sentences). The more Utterances are introduced, the more accurately the bot will identify the user's intents.

Figure 7 – List of Utterances for the Intent “SearchFlights”

Figure 8 – List of Utterances for the Intent “Search Hotels”

After creating the Intents, Entities and Utterances, the last step is to train and test the bot. Select the "Train" and "Test" options. After testing the application, it is necessary to publish it so that it can be integrated with our bot. Select the "Publish" option and copy the generated "Key String" key.
3.4. INTEGRATE LUIS WITH BOT IN VISUAL STUDIO USING C#

In the C# project created in the first step, there are two key files for the bot's operation: the "MessagesController.cs" file in the "Controllers" folder, and the "CustomRootDialog.cs" file in the "Dialogs" folder.

Use the file "CustomRootDialog.cs" to build the logic that interacts with the LUIS application. Define the LUIS model, indicating the application ID and the Key String value. Each LUIS Intent may trigger a different action. The "None" Intent, for example, just prints the message "I do not understand your message!". The other two Intents, "SearchFlights" and "SearchHotels", will call a function "ResultToText" responsible for evaluating the best Intent and listing the Entities found in the user input.

Figure 9 – C# code in “CustomRootDialog.cs”
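As a sketch of what that dialog might look like, assuming the Bot Builder SDK v3 (the application ID and key below are placeholders to be replaced with your own values):

```csharp
// Sketch only: assumes the Bot Builder SDK v3 (Microsoft.Bot.Builder and its
// Microsoft.Bot.Builder.Luis namespace); replace the placeholders with your
// LUIS application ID and Key String.
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<luis-app-id>", "<key-string>")]
[Serializable]
public class CustomRootDialog : LuisDialog<object>
{
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("I do not understand your message!");
        context.Wait(MessageReceived);
    }

    [LuisIntent("SearchFlights")]
    [LuisIntent("SearchHotels")]
    public async Task Search(IDialogContext context, LuisResult result)
    {
        await context.PostAsync(ResultToText(result));
        context.Wait(MessageReceived);
    }

    // Lists the best intent and every entity LUIS found in the input.
    private static string ResultToText(LuisResult result)
    {
        var entities = string.Join(", ",
            result.Entities.Select(e => $"{e.Type} = {e.Entity}"));
        return $"Intent: {result.TopScoringIntent.IntentName}. Entities: {entities}";
    }
}
```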

Finally, it is necessary to change the file "MessagesController.cs", which will invoke the "CustomRootDialog" class. Replace the expression "await Conversation.SendAsync(activity, () => new Dialogs.RootDialog());" with "await Conversation.SendAsync(activity, MakeRoot);" and create the MakeRoot() method inside the MessagesController class.

Figure 10 – MakeRoot method
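The change to "MessagesController.cs" could be sketched as follows, assuming the default v3 template controller:

```csharp
// Sketch only: assumes the default "Bot Application" template controller
// (Microsoft.Bot.Builder and Microsoft.Bot.Connector packages).
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[BotAuthentication]
public class MessagesController : ApiController
{
    // Factory for the root dialog, passed to Conversation.SendAsync.
    internal static IDialog<object> MakeRoot()
    {
        return new Dialogs.CustomRootDialog();
    }

    public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
    {
        if (activity.Type == ActivityTypes.Message)
        {
            // Route the incoming message to our LUIS-backed dialog.
            await Conversation.SendAsync(activity, MakeRoot);
        }

        return Request.CreateResponse(HttpStatusCode.OK);
    }
}
```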

3.5. BUILD PROJECT AND TEST WITH BOT EMULATOR

Save and build the project; now we can use the emulator to test the new bot capabilities. When the user submits text to the bot application, the bot evaluates it and returns the best Intent, as well as all Entities found in the text.

Figure 11 – Testing new bot capabilities with Bot Emulator

4. CONCLUSION

With the evolution of technology, users prefer simpler and more dynamic interfaces, using natural language to communicate, instead of traditional menus and windows.

Microsoft Bot Framework provides a set of tools that allow you to create bots and integrate them into multiple channels, like Slack or Skype. The framework is open source and can communicate with other applications through a REST API interface.

The intelligence of bot applications can be increased through integration with Microsoft Cognitive Services, allowing them to understand natural language with LUIS.

       Ana Guerreiro         Daniel Rodrigues
 Associate Consultant          Consultant