User research powered by emotion recognition

smileML is user research software that analyzes user emotion in interview videos, transcribes speech, and lets you share insights through collaborative tools. With a visual emotion timeline, you can spot moments of delight or frustration and jump right to them.

I'd love to test it and share feedback. However, not living in the US so 🤷🏼‍♂️
@evertsemeijn Thank you for letting us know! We have GDPR compliance on our roadmap, but we'll certainly bump it up based on how much international interest we've received today. Where are you based?
Hey folks, We’re very excited to share what we’ve been working on with all of you! We’ll be around in the comments to answer questions throughout the day, but here’s some info on smileML:

smileML is user research software that analyzes user emotion in interview videos, transcribes speech, and lets you share insights through collaborative tools. Gone are the days of keeping notes and timestamps in Google Docs, tracking down user testing interview videos, and sifting through them until you find the moment you wanted to share with a coworker. With smileML, you can easily organize user research projects and seamlessly share insights across teams.

Our machine learning algorithms analyze user testing interview videos and produce a timeline of emotion throughout the video. With a visual timeline, you can see where the moments of delight or frustration are and jump right to them.

Focus on the Moments that Matter
• Eliminate sifting through hours of raw video
• Identify key moments of delight and frustration within raw video
• Search through video and a time-stamped transcript using keywords
• Export key moments for presentations

Drive Insight Through Collaboration
• Group user testing interviews into projects
• Compare emotions and key moments among participants
• Collaborate and share notes across teams
• Tag coworkers in time-stamped notes to share insights
Hey everyone, We’re so happy to finally launch and hear your thoughts! Here are some FAQs about the app:

1. How many expressions can you classify?
We classify 3 expressions: positive, intense, and neutral. Positive is primarily activated by smiles, intense by furrowed eyebrows or otherwise strained expressions, and neutral by a relaxed expression. As our dataset grows more robust, we plan on offering a more nuanced classification.

2. What does it cost?
It’s free to download, and every account starts with 3 projects that have up to 3 hours of transcription/analysis each! If your needs go beyond this free tier, please contact either myself ( or my co-founder (

3. How do I use the application?
After you’ve made an account, create a project by entering a name and (optional) keywords about the project. Then add videos by selecting an mp4, giving it a name, and again providing (optional) keywords that will assist the transcription. Click “Process”, and the app will begin analyzing the video! After it’s finished, click on the video to view the emotion timeline and transcript. Find key moments and take time-stamped notes to remember them. Invite and tag other team members to quickly share insights!

4. How long does analysis take?
It depends on the quality of the video, but a 10-minute 720p video will take roughly 10 minutes to analyze.

5. How many faces will it analyze?
Currently, it’s designed to analyze one face per video, but we aim to expand to five faces in our next major update.
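The FAQ above describes a per-video emotion timeline built from three expression classes (positive, intense, neutral). smileML's actual pipeline is not public, but as a purely illustrative sketch, per-second classifier labels could be collapsed into contiguous timeline segments roughly like this (the function name and data shapes here are assumptions, not the product's API):

```python
# Hypothetical sketch: collapse per-second expression labels into
# (start_sec, end_sec, label) segments for an emotion timeline.
# The three labels come from the FAQ; everything else is illustrative.

def build_timeline(labels):
    """Merge runs of identical per-second labels into timeline segments."""
    segments = []
    for i, label in enumerate(labels):
        if segments and segments[-1][2] == label:
            # Same expression as the previous second: extend the segment.
            start, _, lab = segments[-1]
            segments[-1] = (start, i + 1, lab)
        else:
            # Expression changed: open a new one-second segment.
            segments.append((i, i + 1, label))
    return segments

# Example: six seconds of footage.
timeline = build_timeline(
    ["neutral", "neutral", "positive", "positive", "intense", "neutral"]
)
print(timeline)
# → [(0, 2, 'neutral'), (2, 4, 'positive'), (4, 5, 'intense'), (5, 6, 'neutral')]
```

A segment view like this is what lets the UI render a visual timeline and jump straight to moments of delight (positive runs) or frustration (intense runs).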