bottest.ai

Automated Chatbot Testing Done Right — with no code.

5.0 · 1 review · 91 followers

Spending countless hours manually testing your chatbot after each change? Automate the full testing process with bottest.ai, the no-code platform to build quality, reliability, and safety into your AI-based chatbot. Get started now: https://bottest.ai
Free


Noah Moscovici
Hi! I'm Noah, the Founder and CEO of bottest.ai. Our mission is to help AI creators who are building chatbots with the testing process. Developers and product managers often spend hours every week manually having the same conversations with their chatbots to make sure quality holds up after each internal change. Traditional testing paradigms don't solve this problem, for three main reasons:
1. Manual testing quickly becomes overwhelming as your chatbot evolves.
2. Developer-created evaluations eat up precious time and rarely cover all scenarios.
3. Rigid testing scripts can't capture the fluid nature of language in conversations.
bottest.ai is a no-code platform that fully automates the testing of your chatbot. We use an AI-powered evaluation engine to determine whether the quality of your chatbot is degrading with each change. We are currently running our beta program, which gives full access and is completely free for all users for the next 6 months. Get started testing now! https://bottest.ai
Shashank Sanjay
@noah_moscovici Hey Noah, cool idea. I'm currently building a chatbot product, and we have some evals already set up. Is there an easy way for us to import these into your tool? Check out our launch and lmk if we're actually a good fit for what you're building. I signed up :)
Noah Moscovici
@shashank_sanjay Thank you! We just launched our beta program, which gives full access and is completely free for the next 6 months. I'm curious to learn more about your current setup. We don't have a customer-facing way to import evals yet, but depending on what type of data you have and how you're storing it, there may be some easy ways we can import it for you!
Noah Moscovici
@dash4u Good question! I'll answer it in two parts, since it touches on a couple of good points:

1. How do we evaluate whether something is "correct"? We have an AI-powered evaluation engine that breaks down the conversation between you and the chatbot and compares it to the baseline from when you originally recorded the test. The engine accurately picks out the key differences, inconsistencies, and deviations from the baseline, then determines whether those differences are major enough to fail the test, or passes it if they are just minor changes with no semantic difference.

2. What if my definition of "correct" is unique to my product? You can fully customize what constitutes a "pass" in bottest.ai. This is called "Success Criteria" and can be set on an individual Test or on an entire Suite. You define exactly which types of differences should pass and which should fail. So if you only care about factual information and not tone or intent differences, you can specify that in your configuration. Or if you expect the chatbot to respond with a lot of variance and just want the general tone and intent to stay the same, you can specify that as well. There's unlimited freedom when it comes to customizing this aspect of the bottest.ai evaluation process.
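To make the Success Criteria idea concrete, here's a minimal Python sketch of Suite-level criteria with per-Test overrides. All of these names (SuccessCriteria, criteria_for, and so on) are made up for illustration; this isn't bottest.ai's actual implementation or API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuccessCriteria:
    # Free-form description the evaluation engine grades against,
    # e.g. "only factual errors fail; ignore tone and phrasing".
    description: str

@dataclass
class Test:
    name: str
    criteria: Optional[SuccessCriteria] = None  # optional per-Test override

@dataclass
class Suite:
    criteria: SuccessCriteria  # Suite-level default
    tests: list                # list of Test

def criteria_for(suite: Suite, test: Test) -> SuccessCriteria:
    """A Test's own Success Criteria wins; otherwise use the Suite's."""
    return test.criteria or suite.criteria

suite = Suite(
    criteria=SuccessCriteria("fail on any factual or intent change"),
    tests=[
        Test("refund-policy"),  # inherits the Suite default
        Test("small-talk", SuccessCriteria("fail only if tone changes")),
    ],
)
for t in suite.tests:
    print(t.name, "->", criteria_for(suite, t).description)
```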
Michael Green
Super excited about bottest.ai, Noah! 🙌 Automating the testing process for chatbots is a game changer. Manual testing can be such a time sink, and your no-code approach is going to help a lot of developers focus on what really matters. Can't wait to see the traction you gain during this beta phase! Upvoting this for sure!
Noah Moscovici
@michaelgreen Thank you! We really appreciate the support!
Star Boat
This sounds like a game changer, Noah! 🛠️ Automating the testing process for chatbots seems like a huge relief for developers who are constantly updating their bot. I'm curious, though—what specific types of evaluations does your AI engine perform that traditional methods might miss? Also, how do you ensure the platform adapts to different languages or dialects? Since chatbots are increasingly used globally, I wonder how your tool can handle that fluidity in conversation across diverse linguistic contexts. Looking forward to seeing how bottest.ai evolves during the beta phase!
Noah Moscovici
@star_boat Thank you for your thoughts! You asked a couple of good questions, so let me take them in turn:

1. What types of evaluations does your AI engine perform that traditional methods miss? Traditional testing methods fail chatbots in three main ways:
a) Language-based responses are subjective and non-deterministic. Unlike traditional software testing, where an input has a determined expected output, responses from a chatbot require a nuanced evaluation based on semantic meaning.
b) Upgrades or improvements to the AI can cause unexpected issues elsewhere. Each change to the underlying LLM may improve the quality of answers on some questions while degrading quality on other prompts. Developing a high-quality AI chatbot without extensive regression testing is practically impossible.
c) The subject matter experts often aren't the ones maintaining the code. The people on your team who can best judge how the chatbot should perform are rarely the engineers maintaining the tests, which makes it very difficult to build comprehensive automated test coverage for a chatbot.

Our AI evaluation engine solves the first point by evaluating language responses in a multi-step process that picks out the key differences you care about and skips noise like rephrasing or synonyms. Using an AI to test the quality of another AI isn't anything new, though. The true power of bottest.ai comes from the other two points: automated regression tests across all questions whenever anything in your chatbot changes (see the sketch below), and the ability for product owners to manage the tests and quality directly. No more passing prompts and expected responses to developers, who then have to judge whether a response is "good" when they aren't experts in your chatbot's niche!

2. How do you ensure support for different languages? Our AI-powered evaluation engine is built to be language-agnostic. Your failure reasons, details, and other information about the tests will be in English, but the engine can handle any language the conversation happens in. For example, if your chatbot serves an international customer base, you can record and run Tests in that language, and all of the information you need (such as why tests failed) stays in English.

Hopefully that answers your questions! Thank you for your support, and let me know if I can answer anything else :)
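As promised, here's a rough sketch of the regression idea from point (b). It's illustrative Python with hypothetical names (RecordedTest, run_regression_suite, the respond and judge stand-ins), not real bottest.ai code:

```python
from dataclasses import dataclass

@dataclass
class RecordedTest:
    name: str
    prompts: list   # the conversation turns you recorded
    baseline: list  # the responses captured when the Test was recorded

def run_regression_suite(tests, respond, judge):
    """Replay every recorded conversation and compare it to its baseline.

    `respond` is the chatbot under test; `judge` stands in for an
    LLM-based evaluator that compares transcripts semantically.
    """
    results = {}
    for test in tests:
        transcript = [respond(p) for p in test.prompts]
        results[test.name] = judge(test.baseline, transcript)
    return results

# Toy stand-ins so the sketch runs end to end; a real judge would
# compare meaning, not exact strings.
respond = lambda prompt: prompt.upper()
judge = lambda baseline, current: "pass" if baseline == current else "fail"

tests = [RecordedTest("greeting", ["hi there"], ["HI THERE"])]
print(run_regression_suite(tests, respond, judge))  # {'greeting': 'pass'}
```

The point of re-running the whole suite after every change is that an improvement on one prompt can't silently degrade quality on another.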
Star Boat
@noah_moscovici Thanks for your reply, that helps a lot. :)
Elke
This looks promising, Noah! The automation aspect of testing chatbots is definitely a game-changer. How does the AI-powered evaluation engine actually work? Will it adapt as the chatbot’s conversation style evolves, or do users need to manually tweak anything? I'm curious if there's any kind of analytics or reporting that comes with the testing process, especially to track performance over time. Overall, really excited to see how this could improve the testing workflow!
Noah Moscovici
@elke_qin Thanks for the kind words! You asked some amazing questions:

1. How does the AI-powered evaluation engine actually work? Our evaluation engine runs a multi-step process: 1) pick out any differences between the current conversation and the baseline one; 2) determine the severity of each difference based on your "Success Criteria"; 3) determine an overall pass or fail from those evaluated differences. (There's an illustrative sketch of these steps below.)

2. Will it adapt as the chatbot's conversation style evolves, or will users need to manually tweak anything? I hope I understand this correctly: you're asking whether our evaluation engine builds a dynamic understanding of a chatbot's style of speaking and can evolve to better understand what a "pass" or "fail" means to you. Here's the great thing: you don't have to use our default "Success Criteria". You have full customization powers to define exactly what a "pass" and a "fail" should be for your product and your situation. You can define Success Criteria on an individual Test, or altogether at the Suite level, and the engine uses it to decide whether a test passes or fails, so you can really fine-tune the process to match exactly what a pass and a fail mean for your use case.

3. Are there any analytics or reports? Yes! This is a huge part of the bottest.ai platform, and something of the utmost importance when testing chatbots (but almost always forgotten in in-house or custom-built solutions due to budget and time constraints). After every single Suite Run (running all the Tests in your Suite at once), an automatic report is generated that compares the Run against a previous one and highlights key differences, improvements, and degradations, so you have full visibility into how your Tests are performing. Additionally, there is a dedicated Analytics page where you can track your trending Success data, Performance data (how long your chatbot's responses took), and Usage data. Analytics and reporting are something we spent a lot of time on at bottest.ai to make sure they're useful for our users, and we encourage you to check them out! The whole platform is completely free to use during our beta program for the next 6 months, so I invite you to try it out and let us know your thoughts!
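And since a few people have asked how the three steps fit together, here's a toy sketch of the pipeline. Everything here (find_differences, grade_severity, verdict) is a hypothetical illustration, not our production engine:

```python
def find_differences(baseline: str, current: str) -> list:
    """Step 1: pick out differences between the current response and
    the baseline. A real engine would use an LLM for semantic diffing;
    this toy version only reports whether the texts differ at all."""
    return ["responses differ"] if baseline != current else []

def grade_severity(difference: str, success_criteria: str) -> str:
    """Step 2: grade each difference against the Success Criteria.
    Here every difference is naively treated as major."""
    return "major"

def verdict(baseline: str, current: str, success_criteria: str) -> str:
    """Step 3: overall pass/fail from the graded differences."""
    diffs = find_differences(baseline, current)
    severities = [grade_severity(d, success_criteria) for d in diffs]
    return "fail" if "major" in severities else "pass"

# The naive string comparison fails a semantically identical answer,
# which is exactly why an AI-based judge is needed for steps 1 and 2.
print(verdict("Paris is the capital of France.",
              "The capital of France is Paris.",
              "only factual changes should fail"))  # -> "fail"
```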
Christopher David Anderson
Congrats, Noah! 🚀 This looks like a game-changer for AI chatbot developers. Automating testing will definitely save tons of time and improve quality. Excited to see how bottest.ai evolves! #Makers #PH
Noah Moscovici
@christopherdavidanderson Thanks for the feedback! We appreciate it a lot!
SEN
This is such a timely solution, Noah! Manual testing can really suck up time and resources for chatbot developers. The AI-powered evaluation engine you mentioned is especially appealing, since it addresses the fluid nature of language that traditional scripts often overlook. I'm excited to see how this can enhance the quality and reliability of chatbots without the hassle of extensive manual testing. Definitely going to check out the beta and give it a try! This could save so many hours for developers and product managers alike. Kudos to you and your team for taking on this challenge! Upvoting and looking forward to the future of chatbot testing!
Noah Moscovici
@big_tree Thanks for the feedback and comment! We appreciate your support! Definitely reach out if you have any issues or questions; we want to support you!
Bryan
Congrats, Noah! 🚀 It's amazing to see how bottest.ai addresses such a critical challenge in chatbot development. Manual testing is truly a massive time sink, and your no-code solution sounds like a game changer. The AI-powered evaluation engine seems to provide exactly what developers need to ensure quality without the hassle. Excited to see more Makers adopt this in their workflow. Looking forward to the feedback from the beta program! Keep up the great work!
Noah Moscovici
@dance17219 Thank you! We really appreciate your support and your feedback!