Feature Friday: All You Need to Know About SAGE, Our Chatbot Testing Feature
They’re everywhere! Now that summer is rolling through the Northern Hemisphere, BUGS are everywhere! Yes, I’m talking about real, physical bugs, not the ones we find in our software. But, I suppose those are abundant everywhere, as well. Similarly, BOTS are now everywhere: from crawlers, to transactional and informational bots, and even Conversational Agents (or chatbots), the focus of this Feature Friday! Today, we’re here to introduce you to our new tool to test and train AI chatbots: SAGE.
What do you mean?
Chatbots are becoming more and more intelligent with advances in the areas of Natural Language Processing and Understanding. But still, organizations and industries are hesitant and skeptical about adopting these agents. Not because chatbots are hard to build or develop (one can easily develop a chatbot using Google DialogFlow or Amazon Lex), but because they are hard to test and maintain.
Now, what if we say: we have a bot (we have named it SAGE) that can connect to your Conversational Agent/chatbot, have a conversation about your bot’s functionality, and rate your bot’s performance. Just like AI Gilfoyle having a chat with AI Dinesh, but without breaking anything. You might be thinking, “That’s sick!” It doesn’t end here; SAGE also helps improve your bot’s performance by providing it with data from all the failed conversations it had.
What is SAGE?
SAGE is our Conversational Testing Platform, the very first of its kind. Currently, it’s under wraps, so we will try to divulge as much as we can to get you excited. We wanted to develop a Super Agent Generalized on Everything, and so we started building SAGE. Currently, it works with the English language, and it still has a long way to go.
SAGE can be seamlessly integrated with chatbot development platforms like Google’s DialogFlow, Meta’s Wit, and Amazon’s Lex to test and maintain your Conversational Agent/chatbot. SAGE tests the agent’s capability to understand and reply to phrases/utterances by creating hundreds to thousands of virtual conversations with your agent in just minutes.
We already have a lot of test automation functionality in Qyrus for testing APIs, Web Applications, and Native Mobile Applications, but we wanted to develop (and are still developing) something new that has never been built, or even imagined, before.
With the advancements in Natural Language Processing and Understanding, we thought, why don’t we create a Super Agent which can “test” other Conversational Agents/chatbots? Moreover, it’s hard to benchmark/test AI, and there is a scarcity of good platforms that can aid in testing and maintaining chatbots.
What does SAGE solve?
In the field of Machine Learning and AI, models are benchmarked using a set of test data, and a model’s effectiveness is decided based on its performance on that data. Let’s take the example of a conversational agent that is trained to provide customers with details regarding their credit and debit accounts, transactions, purchases, and deposits. In this case, the agent needs to learn which types of sentences or phrases point to which of the above-mentioned categories.
Moreover, the same thing can be asked in many different ways. Let’s say I want to know the outstanding amount on my credit card; I can ask the agent in a multitude of ways: what’s my credit card bill for this month, or what’s the due credit, or what’s the minimum due for this month? Still, these are normal phrases. But if a non-native speaker is interacting with this bot and makes a spelling mistake, or the structure of the sentence is not correct (for example, “What account balance?” instead of “What’s my account balance?”), the bot won’t understand it and will fail to provide an appropriate response. Not all of these test cases will be covered in the training, validation, or test datasets. Only when users get their hands on the agent does the developer come to know about all of its shortcomings.
We have added different testing modes in SAGE that help developers check how their agent performs with badly written phrases, phrases with spelling and grammatical errors, phrases with different slang, phrases with non-trivial words, etc. Plus, we provide a data upload integration, so the developer just needs to upload the phrases their agent couldn’t understand and re-train in just a few clicks. These testing features and modes help developers add more and more features to their agents while continuously testing, benchmarking, and validating each new version using SAGE.
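To give a feel for what a “phrases with spelling errors” mode might do under the hood, here is a minimal sketch, not SAGE’s actual implementation, of generating misspelled variants of a known-good utterance by injecting a single typing error (a swapped, dropped, or duplicated character):

```python
import random

def perturb(phrase, seed=None):
    """Return a copy of `phrase` with one simulated typing error:
    an adjacent-character swap, a dropped character, or a
    duplicated character."""
    rng = random.Random(seed)
    chars = list(phrase)
    i = rng.randrange(len(chars) - 1)     # position of the error
    kind = rng.choice(["swap", "drop", "dup"])
    if kind == "swap":
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    elif kind == "drop":
        del chars[i]
    else:  # "dup"
        chars.insert(i, chars[i])
    return "".join(chars)

base = "what's my account balance"
# Fifty seeded perturbations give a small pool of noisy test inputs.
variants = {perturb(base, seed=s) for s in range(50)}
```

Each variant would then be sent to the agent to see whether it still resolves to the right intent; a real testing mode would layer on grammar mangling, slang substitution, and so on.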
How does SAGE work?
Chatbots are everywhere, like it or not. We use chatbot assistants like Siri, Alexa, or Google Assistant to get different things done from time to time. Some might use them to set reminders or timers, some might use them to check facts, some might have a casual chat with them, and some might even use them to control household appliances and electronics.
Similar to these assistants, there can be a conversational agent for banks wherein a user can check their balance or transfer money by just chatting with the agent, or a conversational agent to track all your appointments and meetings, or a conversational agent for a restaurant support system. Customer-facing applications like these, built for support and engagement, are moving toward such conversational agents to minimize cost and human intervention in servicing and support. Platforms like DialogFlow and Wit are also boosting the adoption of Conversational Agents, but testing and maintenance remain an unsolved problem.
This is where SAGE comes in and alleviates the chatbot testing and maintenance issues.
With SAGE we offer two types of testing abilities for chatbots:
- Intent Testing
- Entity Testing
What is Intent Testing?
Every chatbot must understand what the user is saying and act accordingly. The process of understanding what the user is trying to say by typing or speaking is called intent detection. The output of those actions (i.e., typing or speaking) is called an utterance. The chatbot first needs to understand this utterance and map it to a pre-defined intent.
If the bot fails at this very first step (and trust us, many bots fail) the conversation won’t move forward, and the user might close the application and move on.
Using SAGE, you can avoid this. You only need to add your pre-defined intents into SAGE and provide your bot credentials; it then connects to your bot and generates hundreds to thousands of conversations based on those pre-defined intents, thus testing your bot’s ability to understand intent.
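The core of an intent test is simple to state: for each generated utterance, ask the bot which intent it detected and compare against the expected one. The sketch below illustrates that loop; the intent names, sample utterances, and `detect_intent` stub are all illustrative stand-ins (a real run would call the bot’s endpoint, e.g. a DialogFlow or Lex detect-intent request), not SAGE’s actual API:

```python
# Expected mapping from utterance to intent, as a developer might
# configure it for the banking agent from the example above.
EXPECTED = {
    "what's my credit card bill for this month": "credit_balance",
    "transfer 50 dollars to savings": "transfer_funds",
    "show my recent purchases": "transaction_history",
}

def detect_intent(utterance):
    """Stub standing in for a real chatbot call; a keyword lookup
    is enough to illustrate the test loop."""
    if "credit" in utterance:
        return "credit_balance"
    if "transfer" in utterance:
        return "transfer_funds"
    return "transaction_history"

def run_intent_tests(expected):
    """Send each utterance to the bot and collect mismatches."""
    failures = []
    for utterance, intent in expected.items():
        got = detect_intent(utterance)
        if got != intent:
            failures.append((utterance, intent, got))
    passed = len(expected) - len(failures)
    return passed, failures

passed, failures = run_intent_tests(EXPECTED)
```

The failure list is exactly the kind of data mentioned earlier: the misunderstood phrases a developer would feed back into re-training.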
What is Entity Testing?
When you ask Siri to “Set a timer for ten minutes…” it understands the intent first and then extracts the words “ten” and “minutes” from the utterance and sets the timer based on that. These words are what we call entities. One could use different units of time to set an alarm or timer. Similarly, there can be other cases where different words can be used for the same entity type.
Based on the intent and the entity types the chatbot supports, SAGE generates many words of the same type and converses with the chatbot, weaving these generated words into the conversation to check the bot’s capability to detect entities.
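As a rough illustration of that idea (the template, value list, and extractor below are assumptions for the sketch, not SAGE’s real machinery), one can fill an utterance template with different values of a single entity type and check that each value is recovered:

```python
import re

# One utterance template for the timer intent, plus several
# duration values of the same entity type to substitute in.
TEMPLATE = "set a timer for {amount} {unit}"
DURATIONS = [("ten", "minutes"), ("30", "seconds"), ("two", "hours")]

def extract_duration(utterance):
    """Stub entity extractor: pull the amount/unit pair back out.
    A real bot's NLU layer would do this, not a regex."""
    m = re.search(r"for (\S+) (minutes|seconds|hours)", utterance)
    return (m.group(1), m.group(2)) if m else None

# True where the bot recovered the entity values it was given.
results = []
for amount, unit in DURATIONS:
    utterance = TEMPLATE.format(amount=amount, unit=unit)
    results.append(extract_duration(utterance) == (amount, unit))
```

Scaling the value list from three durations to hundreds of generated values per entity type is what turns this loop into the kind of bulk entity test described above.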
We hope you like what you learned about SAGE. Again, this is just a sneak peek at what’s to come! SAGE is still being developed, but we have seen amazing leaps and bounds thus far with what we already have. Join us again in future postings to learn more about the amazing abilities of SAGE. For now, close your laptops or shut down your machines and enjoy the weekend! Stay cool, stay safe!