Customizing Your Assistant with Intent and Context

The service we've been working with in this series is a really simple one that allows developers to create their own basic personal AI assistant, working a bit like Siri or Amazon's Alexa. Last week, I covered how to build your own AI assistant, showing the basics of setting up an agent and accessing the pre-existing knowledge base the service provides. In this article, I'd like to go a step further and introduce "intents" and "contexts": a way of teaching our AI assistants more specific actions that are personalized to our own needs. This is where things can get really exciting.

What is an Intent?

An intent is a concept that your assistant can be taught to understand and react to with a specific action. An intent contains a range of example sentences that the user might say to our assistant. A few examples could include "Order me lunch", "Show me today's daily Garfield comic strip", "Send a random gif to the SitePoint team on Slack", "Cheer me up" and so on. Each of those would be a custom intent which we could train our assistant to understand.
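Conceptually, an intent ties a set of example phrases to an action name and some responses. Here's a rough sketch of that structure as a plain JavaScript object (purely illustrative; the field names are my own, not the service's actual storage or export format):

```javascript
// Purely illustrative: the pieces that make up an intent, sketched as a
// plain object. This is NOT the service's actual storage format.
const cheerMeUpIntent = {
  name: "Cheer me up",
  // Example sentences a user might say to trigger this intent
  userSays: ["Cheer me up", "Make me smile", "I feel sad"],
  // The action name returned to our web app when the intent matches
  action: "cheermeup",
  // What the assistant says back to the user
  responses: ["Let's cheer you up! Would you like a joke or a movie quote?"]
};

console.log(cheerMeUpIntent.name);
```

We'll build up each of these pieces (example sentences, the action name, and the responses) in the service's console over the rest of this article.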

Creating an Intent

To create an intent, log in to the console, open the agent you'd like to add the new functionality to, and click either the "Create the first one" link, the "Create Intent" button next to the "Intents" heading at the top of the page, or the "Intents" plus icon in the left-hand menu (you'll need to click the hamburger icon to open it):

Creating a new intent

For this demo assistant's sample intent, we'd like to teach him to cheer people up when they're feeling down with movie quotes, jokes and other things. To start, we will call the new intent "Cheer me up" and write our first trigger sentence underneath "User says". The first sentence I've added is "Cheer me up" itself. Hit the Enter key or click "Add" to add your sentence:

Setting up our first intent

Typically, we have a range of different ways we might say the same thing. To account for these, we will add in a range of statements which represent various ways a user might indicate they’d like cheering up such as “Make me smile” and “I feel sad”:

Cheer up sentences

Now we have a range of sentences which the assistant should understand, but we have not told it what action is expected when it hears them. To do so, we create an "action". The assistant returns action names back to our web app to allow it to respond. In our case, we'll call our first action "cheermeup". We won't actually use this action name in this demo, but it will come in handy in future when responding to actions in our web app, so I'd recommend always including action names for your intents.

Creating an action
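To give a sense of how that action name could eventually be used, here's a hedged sketch of a web app branching on the action returned in a query response. The response shape below mirrors a typical query response from this kind of service, but the exact field names depend on the service and SDK version, so treat them as assumptions:

```javascript
// Hypothetical sketch: branching on the action name the agent returns.
// The field names (result.action, result.fulfillment.speech) are
// assumptions based on a typical query response shape.
const sampleResponse = {
  result: {
    action: "cheermeup",
    fulfillment: {
      speech: "Let's cheer you up! Would you like a joke or a movie quote?"
    }
  }
};

function handleResponse(response) {
  const action = response.result && response.result.action;
  switch (action) {
    case "cheermeup":
      // For now we simply relay the agent's own speech response.
      return response.result.fulfillment.speech;
    default:
      return "Sorry, I'm not sure how to help with that yet.";
  }
}

console.log(handleResponse(sampleResponse));
```

In this article the agent's built-in speech responses do all the work, so the `cheermeup` branch just passes them through; in a later article we could replace that branch with custom behavior.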

We can add parameters to our actions too, however I'll cover that in detail in my next article!

Guiding Via Speech Response

After our user has told the agent that they’d like to be cheered up, we want to guide the conversation into the user telling the agent more about what they’d like. To do so, we provide speech responses in the form of questions within the “Speech Response” section. For example, “Let’s cheer you up! Would you like a joke or a movie quote?”

Our speech responses

Finally, we click the “Save” button next to our intent name to save our progress.

Click to save your agent

Testing Your Agent

We can test out our new intent by typing a test statement into the test console on the right. Let’s test it out by saying “Cheer me up”:

Our first test

The agent responds with one of our trained responses, as intended. We can also vary the phrasing of the statement, and the agent will often still work it out. For example, "Make me smile please", "Say something to make me smile" or "I feel sad right now" will result in our intent running too:

A second test variation

One thing you might notice is that a speech response like "I'm sorry to hear that! How can I help you feel better?" isn't quite specific enough to guide the user. If they aren't aware that the options are "movie quote" or "joke", they might ask for something we haven't covered! Over time, we can train our agent to understand many other concepts, but for now I'd recommend being specific with your questions.

Using Contexts

By guiding the conversation with our speech response, we also need a way for our agent to follow what the conversation was about when the user next speaks. If a user says "A joke" or even "Either one" without any prior conversation, that sentence, out of context, might not be clear enough for the agent to respond to. This is where setting contexts comes in. We create contexts to track what the user and agent have been speaking about. Without contexts, each sentence would be completely isolated from the one before it.

To create a context, click the “Define contexts” link at the top of the console for your intent:

Opening the context options

Here we have a section for input contexts and a section for output contexts. Input contexts tell the agent in which context the intent should be run. For our first intent, we want it to run at any time, so we leave the input contexts blank. Output contexts are what set up an intent to be picked up by future messages, and this is the one we want:

Input and output contexts

We will create an output context called "cheering-up". When naming a context, alphanumeric names without spaces are recommended. Type in your context name and hit the Enter key to add it, then click "Save" to save your changes:

Creating our output context

If we then test our agent by asking it to "Cheer me up" once more, the result shows that our context is now appearing too:

Our context in action
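Contexts also come back in the query response, which means our web app can check whether a context is active before deciding how to behave. Here's a hedged sketch of that check; the `result.contexts` field name is an assumption based on a typical query response shape, so verify it against your SDK's documentation:

```javascript
// Hypothetical sketch: reading the conversation contexts the agent
// reports back after a query. The response shape is an assumption.
const sampleResponse = {
  result: {
    action: "cheermeup",
    contexts: [{ name: "cheering-up", lifespan: 5 }]
  }
};

// Returns true if the named context is active in the agent's response.
function hasContext(response, contextName) {
  const contexts = (response.result && response.result.contexts) || [];
  return contexts.some((context) => context.name === contextName);
}

console.log(hasContext(sampleResponse, "cheering-up"));
```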

Filtering Intents With Contexts

Our agent now understands that there is a conversation context of "cheering-up", so we can set up an intent to run only if that context is active. As an example, we will create one possible response to our agent's question: "A movie quote". Go back to the menu on the left and click the plus icon to create a new intent:

Creating our second intent

We call our intent "Movie quote" and set the input context to "cheering-up". This tells our agent that it should only consider this response if the user has previously asked to be cheered up. We add a few sample ways the user might respond, such as "I'd like a movie quote":

Our custom movie quote intent

Then, we scroll down and in our response, we include a range of movie quotes (feel free to include your own favorites):

Adding our movie quotes

Click “Save” next to your intent’s name once again to create your movie quote intent. Then in the test console beside it, try entering “Cheer me up” and follow it with “Movie quote”. The agent should now tell you a movie quote!

Our movie quote intent in action

You could then follow the same process to add an "A joke" intent too.

We also don't need to limit our agent to a list of hard-coded responses; we could instead set an action name for each intent and respond to that action within our web app. This is another concept I'll be covering in a future article! We can prepare for future additions by giving our "Movie quote" intent an action called "cheermeup.moviequote" (the dot helps ensure the action doesn't get mixed up with any future generic "moviequote" action we add).
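That dot-namespaced naming makes it easy for a web app to group related actions when dispatching. Here's a small sketch of that idea; the action names match the ones set up in this article, but the dispatch logic and the handler names it returns are purely my own illustration:

```javascript
// Hypothetical sketch: dispatching dot-namespaced action names.
// The handler names returned here are illustrative placeholders.
function dispatchAction(action) {
  if (action.startsWith("cheermeup.")) {
    const subAction = action.split(".")[1];
    if (subAction === "moviequote") {
      return "fetchMovieQuote";
    }
    if (subAction === "joke") {
      return "fetchJoke";
    }
    // Any other "cheermeup.*" action falls back to generic cheering up.
    return "cheerUpGeneric";
  }
  return "fallback";
}

console.log(dispatchAction("cheermeup.moviequote"));
```

Because all cheering-up actions share the `cheermeup.` prefix, new sub-actions can be added later without touching unrelated parts of the dispatcher.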

In Action

If you added these intents into the same personal assistant used for your web app in the previous article, the new functionality should appear automatically! If you created a new one, you will need to update the API keys in your web app first. Open up your personal assistant web app in your web browser and try it out by asking your assistant to cheer you up:

Asking Barry to cheer me up

Then, tell it you would like a movie quote and see what happens:

Following it up with asking for a movie quote


There are plenty of ways to use the concepts of intents and contexts to personalize your assistant, and chances are you already have a few ideas in mind! There is still more we can do to train our assistant by teaching it to recognize concepts (known as entities) within our custom intents; we'll cover that next week!
