LocalLiveAvatar

Instant Live Avatars with Lip-Sync on Everyday Hardware

Unlike other solutions, LocalLiveAvatar runs on everyday hardware: no high-end GPU or cloud required. Response time is instantaneous, whether the avatar speaks for 2 seconds or 20 minutes. No expensive hardware. No cloud. Your data stays secure on your own device. An avatar can be generated from almost any video or photo. Once created, it can voice any text or audio with accurate lip synchronization in any language.
Launch Team

Alexander Radzhabov
I developed this technology to help people with disabilities, especially those who have lost their voice or their original appearance, by creating free, personalized digital clones (voice + avatar). Using a Telegram bot, it gives them a way to communicate and reconnect.

Beyond its social impact, I believe it holds significant commercial potential in sectors like robotics, customer service, entertainment, and personalized AI assistants. And who knows?

  • Perhaps a product like this could even contribute to the development of a secret OpenAI project alongside designer Jony Ive, since the most resource-intensive part of avatar creation can now run directly on the user's device.

  • Maybe robot manufacturers would take interest, too: this could make their robots more engaging, expressive, and emotionally appealing in human interactions.

We could even imagine avatar marketplaces, custom avatars for brands, and much more.

To prevent the serious risk of real-time deepfake misuse, I have chosen not to release the source code publicly. My goal is to ensure this technology empowers people rather than causes harm. For this reason, I am seeking commercial partnerships with vetted institutions and companies that are committed to ethical and transparent use.
Alexander Radzhabov

I'm ready to provide a detailed demo and answer all your questions.
For interested parties, I can arrange a demonstration as follows: we connect via a conference call, you send your audio file to me through my Telegram bot, and we immediately see the live avatar speaking your text.

Alexander Radzhabov

LocalLiveAvatar runs on everyday hardware—no high-end GPU or cloud required.
For example:

  • On a mobile CPU (AMD Ryzen 9 7845HX), the system produces approximately 1.3 seconds of avatar video per second of CPU time.

  • On a modest mobile GPU (NVIDIA GeForce RTX 5070 8GB), that jumps to 5.3 seconds of video per second of processing time.
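The figures above can be read as a real-time factor: seconds of avatar video produced per second of compute. Any factor above 1.0 means generation keeps up with live playback, which is why even the CPU-only path can stream without falling behind. A minimal sketch of that arithmetic (the function name is illustrative; the numbers are the ones quoted above):

```python
def realtime_factor(video_seconds: float, compute_seconds: float) -> float:
    """Seconds of avatar video produced per second of compute time.

    A factor > 1.0 means generation keeps up with live playback.
    """
    return video_seconds / compute_seconds

# Figures quoted above:
cpu_rtf = realtime_factor(1.3, 1.0)  # AMD Ryzen 9 7845HX, CPU only
gpu_rtf = realtime_factor(5.3, 1.0)  # NVIDIA GeForce RTX 5070 8GB

for name, rtf in [("CPU", cpu_rtf), ("GPU", gpu_rtf)]:
    headroom = (rtf - 1.0) * 100  # spare capacity beyond live playback
    print(f"{name}: {rtf:.1f}x real time ({headroom:.0f}% headroom)")
```

On the CPU that works out to 1.3x real time (30% headroom); on the GPU, 5.3x, leaving room for higher resolutions or multiple concurrent avatars.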