Automated Testing for Chatbots on Codeship with TestMyBot


This article was originally published on Chatbots Magazine by Florian Treml. With their kind permission, we’re sharing it here for Codeship readers.

One of the most hyped technology trends of recent years is chatbots. There are tons of articles about how to develop a simple chatbot (for example, here or here) using the Facebook Messenger Platform or the Slack API. The technical basics are rather simple and the APIs and SDKs well documented; even beginners can launch a chatbot within days or even hours. You don’t even need your own server to run a chatbot; Heroku (even the free tier) or AWS Lambda is sufficient.

But while new chatbots are announced every day, to my surprise most of them are either pretty basic or of very poor quality (in the software-quality sense).

CleverBot

Natural Language Processing Is Evolving

One of the reasons most chatbots fail to provide a satisfying user experience is the low level of natural language processing they offer. This isn’t really the fault of the chatbot developers themselves; the current AI platforms are still far from human capabilities. That will surely change over the next few years.

Quality Assurance…Or Not

More importantly, the shortcomings of the APIs that chatbot developers rely on are in no way an excuse for not applying state-of-the-art quality assurance concepts to chatbot projects!

A quick side note: IMHO, the reason testing is so often neglected is that most chatbot developers are young and have little experience with large software development projects.

This article provides a step-by-step introduction to adding automated testing to your continuous integration (CI) pipeline. I’m starting from a plain Facebook Messenger bot sample (a scientific calculator), and the outcome is:

  • A suite of conversations this chatbot should be able to handle
  • A Codeship pipeline triggered by each push to GitHub
  • A test runner (TestMyBot) running all conversations against the chatbot
  • A test report showing successful and failing conversations

The project I’m talking about is published on GitHub.

TestMyBot — Automated Testing for Chatbots

TestMyBot is a test automation framework for your chatbot project. It is unopinionated and completely agnostic about any involved development tools. Best of all, it’s free and open source.

Your test cases are recorded by Capture & Replay tools and run against your chatbot implementation automatically over and over again. It is meant to be included in your continuous integration pipeline, just as with your unit tests.

test case

Step 1 – Installation and configuration

The TestMyBot library and Jasmine as a test runner can be added to your chatbot project with a few simple npm commands:

$ npm install testmybot --save-dev
$ npm install jasmine --save-dev
$ ./node_modules/.bin/jasmine init

Add a file spec/testmybot.spec.js with this content:

const bot = require('testmybot');
const botHelper = require('testmybot/helper/jasmine');

botHelper.setupJasmineTestSuite(60000);

Add a file testmybot.json to the project directory:

{ 
  "containermode": "local"
}

The file jasmine.js in the GitHub project directory wires the chatbot code with the TestMyBot code to enable automated conversation testing and configures Jasmine to output an XML test report. The file jasmine-export-testreport.js converts the XML test report into pretty HTML.

TestMyBot comes with integrated helpers for Jasmine and Mocha but can be used with other test runners and assertion libraries as well. In Docker mode, it can additionally test chatbot projects written in other programming languages; see the documentation on GitHub. There are samples for many different project types.

Step 2 – Composing the test cases

With a chatbot, test cases are conversations the chatbot should be able to handle. The conversation transcript should run automatically, and any difference from the transcript should be reported as an error. If you’ve ever worked on a software development project, you may be aware that programming test cases for automated tests is a huge effort; a rule of thumb is that it doubles the development effort. But with TestMyBot, it’s a piece of cake.

TestMyBot includes tools for capturing a conversation and saving the transcript as a conversation test case. This can be done within minutes: when you run $ node chat.js, the chatbot is started in a sandbox environment, and you can start chatting and save the conversation transcript afterwards. Easy peasy.

There is a browser-based tool available as well, or you can even write the transcripts manually in a text editor. You can find more information in the documentation on GitHub or in this blog article.

In the sample project there are several test cases predefined.
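A hand-written transcript is just a plain text file. A minimal sketch of such a conversation file (the exact syntax and file location are described in the TestMyBot documentation; the utterances here are illustrative):

```
calculator addition

#me
2 + 3

#bot
5
```

The first line names the test case; the #me and #bot sections alternate the user input and the expected bot reply.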

Predefined test cases

Step 3 – Codeship continuous integration configuration

Codeship already contains all the necessary tools out of the box. Just start a new project and point it to the GitHub repository (see documentation). The continuous integration build will be triggered by each push to the GitHub repository.

Configure your tests

Codeship test setup commands

$ npm install
$ pip install awscli

These commands install the project dependencies (TestMyBot, Jasmine and others) and the Amazon AWS tools to publish the test reports to an Amazon S3 Bucket.

Codeship test commands

$ npm test
$ npm run-script test-export
$ aws s3 cp test-html-report.html s3://testmybot-sample-calculator/test-html-report.$(date +'%Y_%m_%d_%H%M').html

Now comes the interesting part: this is exactly where TestMyBot executes the test cases, and the test report is generated and published to the Amazon S3 bucket.
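For this to work, the two npm scripts have to be wired to the helper files mentioned in Step 1. A sketch of the relevant package.json section (the actual script definitions in the sample project may differ):

```json
{
  "scripts": {
    "test": "node jasmine.js",
    "test-export": "node jasmine-export-testreport.js"
  }
}
```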

Whenever one of the test cases fails, you will be notified by Codeship (see project notification settings).

Step 4 – Viewing the test reports

This sample uses some Jasmine extensions (this and this) to output a prettily formatted website and publishes these test reports to an Amazon S3 bucket.

execution report

Conclusion

While there are plenty of tools available for easier chatbot development, the existing quality assurance and test automation tools are pretty hard to adapt for chatbot projects. TestMyBot tries to fill this gap and unleashes its full power in combination with a CI service like Codeship.
