Deconflicting steps (After chatlog review and examples have been added)

Updated on December 12, 2022
  • To check for conflicts in the new examples that you have added, click Train Bot and select the Check Conflicts option in the top bar.
  • Do note that it will take some time for the AI-generated assistant to complete this process. The more intents and examples the FAQ has, the longer the conflict check will take.
  • When the process is complete, each intent’s status is shown under the Examples column as a coloured bubble.
  • A green tick indicates that the intent has been tested for conflicts and its examples are good: every trained example has a high confidence score.
  • An orange question mark indicates that new variations have been added to the intent but have not yet been tested for conflicts. It serves as a reminder for the dashboard user to train the AI model.
  • A red exclamation mark indicates that the AI model has classified a particular example under another intent. It warns the user that the intent has conflicting examples.
  • Solution: revise the conflicting example, move it to the top predicted intent, or delete it. A simplified sketch of how such a conflict check works is shown below.
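
For readers curious about what Check Conflicts does conceptually, the Python sketch below is a rough illustration under stated assumptions, not KeyReply's actual implementation: each example is run through the trained model, and the predicted intent is compared with the intent the example was filed under. The `conflict_status` function, the `classify` callable and the 0.8 confidence threshold are hypothetical stand-ins introduced only for this example.

```python
# Illustrative sketch only; this is not KeyReply's platform code.
# `classify` is a hypothetical stand-in for the trained AI model and is
# assumed to return a (predicted_intent, confidence) pair for an example.

HIGH_CONFIDENCE = 0.8  # assumed threshold for a "high confidence score"


def conflict_status(intent_name, examples, classify, retrained=True):
    """Return a status mirroring the coloured bubbles, plus any conflicting examples."""
    if not retrained:
        # New variations were added but the model has not been retrained yet.
        return "orange question mark", []

    conflicts = []
    for example in examples:
        predicted_intent, confidence = classify(example)
        if predicted_intent != intent_name or confidence < HIGH_CONFIDENCE:
            # The model classified this example under another intent,
            # or is not confident it belongs to its own intent.
            conflicts.append((example, predicted_intent))

    if conflicts:
        # Red exclamation mark: revise, move, or delete these examples.
        return "red exclamation mark", conflicts
    # Green tick: every example scores highly for its own intent.
    return "green tick", []
```

In the dashboard, this comparison happens for you when you run Check Conflicts; the sketch only makes explicit why an example that the model assigns to another intent is flagged with a red exclamation mark.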