LieTector AI

Decode emotions. Detect what’s left unsaid. Hire smarter.

LieTector AI is an early prototype that highlights subtle speech and emotion cues in Russian interview snippets. Our end goal: full‑length, multi‑language sessions with real‑time insights. Join the beta and help shape what’s next.
Free

Eliyahu Dorman
LieTector AI: early RU-only prototype (the backend spins up on demand, so ping us!)

It's our no-filter prototype that listens between the lines. Drop in a candidate's interview and we'll spotlight filler words, shaky phrasing, emotional spikes, and other moments that may deserve a second look from a trained recruiter.

Heads-up: the backend is normally powered down to save GPU hours; we turn it on manually for anyone who wants to test. For now it handles only very short audio clips (about 5 minutes), which is perfect for a single question-and-answer sample.

How it works (a rough pipeline sketch follows at the end of this post):
• Speech-to-text: openai/whisper-large-v3
• Emotion layer (RU): KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru
• Speaker diarization: pyannote/speaker-diarization
• Insight engine: IlyaGusev/saiga_llama3_8b

Today the backend lives on a single A100 inside a Google Colab notebook; the frontend is stitched together on Replit. It's messy, it's beta, and that's on purpose: we're validating whether interview-signal analytics like this matter and looking for the first believers to build with us.

Why care? HR teams spend hours reading between the lines. We compress those hours into a one-page map of confidence, anxiety, and conversational style.

We're opening this sandbox to recruiters, partners, and curious investors who'd like to kick the tires, break things, and steer what comes next.

Disclaimer: LieTector AI provides indicators, not verdicts; the final decision always rests with the human expert.
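
For anyone curious how the pieces above could fit together, here is a minimal, hypothetical Python sketch of the pipeline. The model IDs are the ones listed in the post; everything else (the analyze_clip helper, the token placeholder, the output layout) is an assumption for illustration, not the actual LieTector backend.

```python
# Hypothetical sketch of the pipeline described in the post.
# Model IDs come from the post; structure and names are assumptions.
import torch
from transformers import pipeline
from pyannote.audio import Pipeline as DiarizationPipeline

DEVICE = 0 if torch.cuda.is_available() else -1

# 1. Speech-to-text: Whisper large-v3 (chunked so ~5 min clips fit)
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    device=DEVICE,
    chunk_length_s=30,
)

# 2. Emotion layer for Russian speech
emotion_clf = pipeline(
    "audio-classification",
    model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru",
    device=DEVICE,
)

# 3. Speaker diarization (pyannote requires a Hugging Face access token)
diarizer = DiarizationPipeline.from_pretrained(
    "pyannote/speaker-diarization",
    use_auth_token="hf_your_token_here",  # placeholder, not a real token
)


def analyze_clip(audio_path: str) -> dict:
    """Transcribe, diarize and emotion-score one short (~5 min) interview clip."""
    transcript = asr(audio_path, return_timestamps=True)
    diarization = diarizer(audio_path)
    emotions = emotion_clf(audio_path)

    speaker_turns = [
        {"start": round(turn.start, 2), "end": round(turn.end, 2), "speaker": label}
        for turn, _, label in diarization.itertracks(yield_label=True)
    ]

    # This combined output is the kind of material the insight LLM
    # (IlyaGusev/saiga_llama3_8b) would summarize into the one-page report.
    return {
        "text": transcript["text"],
        "segments": transcript.get("chunks", []),
        "speakers": speaker_turns,
        "emotions": emotions,
    }


if __name__ == "__main__":
    print(analyze_clip("sample_answer.wav"))
```

In a production version the emotion scores would more plausibly be computed per diarized segment rather than over the whole clip, and the merged transcript, speaker turns and emotion labels would then be passed to the insight LLM to generate the summary.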