Are you (properly) regression testing your Conversational AI?


I came across this post from the team at The CAI Company recently and thought it was worth highlighting.

Testing is business-critical when it comes to Conversational AI. I think we all know this.

How to test... that's a key question and one that I've been discussing with many companies recently.

The chances are that your existing test team isn't set up to test your Conversational AI implementations properly.

I have been in briefing meetings where the Test Team readily assume it's essentially the same as testing any other interface.

Yes.

And no.

In some cases, it's only when the testers sit down and start pressing buttons that the magnitude of what's expected and needed becomes clear.

Here's a brief snippet from the post by The CAI Company:

What is Regression Testing? Regression testing involves re-running previously developed tests to ensure that existing functionalities still work as expected after new code changes or additions. This practice helps verify that recent updates haven't negatively impacted the existing software. Just to be clear, this applies to bots whether they're LLM-powered, using more traditional NLU, or taking a hybrid approach.

The exact tools and techniques may vary, but the principles (and the reasons you need to test) remain the same. Regression testing can be performed manually or through automation, and as you add new tests to accompany each code update, you gradually build up a comprehensive test suite over time.
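To make that concrete, here's a minimal sketch of what an automated regression suite for a bot might look like, using Python and pytest. Everything in it (the get_reply stub, the utterances, the expected phrases) is a hypothetical placeholder rather than anything from The CAI Company's post; you'd swap in a real call to your own bot:

```python
# regression_test.py -- a minimal sketch of automated regression testing
# for a chatbot. All names and test data here are hypothetical placeholders.
import pytest


def get_reply(utterance: str) -> str:
    # Hypothetical stub standing in for your bot's API. In practice this
    # would send the utterance to your bot endpoint and return its reply.
    canned = {
        "What are your opening hours?": "We are open 9am-5pm, Monday to Friday.",
        "I want to cancel my order": "I can help with that. What is your order number?",
    }
    return canned.get(utterance, "Sorry, I didn't understand that.")


# Each (utterance, expected phrase) pair is one previously passing test.
# As new features ship, append new pairs so the suite grows over time.
CASES = [
    ("What are your opening hours?", "9am-5pm"),
    ("I want to cancel my order", "order number"),
]


@pytest.mark.parametrize("utterance,expected", CASES)
def test_bot_still_handles(utterance, expected):
    # For LLM-powered bots, exact-match assertions are brittle; checking
    # for a key phrase in the reply is a more forgiving starting point.
    assert expected.lower() in get_reply(utterance).lower()
```

Run something like this with `pytest regression_test.py` after every change to the bot, and a failure tells you a previously working behaviour has regressed.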

Have a read of the full article here: Bulletproof chatbots: why regression testing is non-negotiable.
