As you may know, a large amount of video content today is generated using artificial intelligence with tools like Sora, Veo, Kling, and others.
The quality of this kind of video is constantly improving and has already reached a very respectable level. At the same time, it is extremely inexpensive compared to traditional video production with actors, camera crews, and everything that comes with it. I'm confident that in a year or two, about 95% of the video published on the internet will be synthetic: fully generated by AI. It feels like the world is heading toward a problem of widespread distrust in video content. We used to say that seeing is believing, but that rule no longer seems to apply.
I'm Artem Anikeev, founder of FakeRadar.io, the world's first real-time deepfake detection tool designed specifically for video calls.
Why FakeRadar?
The rise of deepfake technology has brought new risks to everyday digital communication. From fraudulent job interviews to high-stakes banking scams, the threat is real and constantly evolving. FakeRadar was born to address this urgent need with cutting-edge, accessible technology.
Both AirPods Pro 3 and Ear (3) launched this month. I'm curious which one folks would get. I really like the design of Ear (3), and I can see myself using the Super Mic on the case a lot. But from all the reviews of the AirPods Pro 3, it seems their sound quality, ANC, and microphone are better. So I might have to pick those as the winner for me. What does everyone else think?
Have you ever faced a situation where you were talking to an avatar instead of a real person during a video call? Modern technology makes it possible to create very realistic characters, and this can be exploited on dating sites, during job interviews, or by scammers.
This is a hot topic these days.
Hi! I'm Artem, and we're planning to launch FakeRadar on ProductHunt very soon. It's a service that allows you to identify whether you're talking to a real person or a "non-human".
I spent last month testing some assumptions, like: founders want to meet others in person. Those assumptions failed; it's not as simple as that. There are some intricacies.
Well, I think we'll come back to the original assumption that sort of worked. People liked being on the map, sharing their profile, and getting discovered.
Starting with the end in mind can completely change how you play the game. It sets the rules from day one and gives you clarity on when it's time to step away.
We've been seeing more sophisticated deepfake attempts lately, especially ones that retry multiple times with tiny changes. It made us wonder:
Is passive liveness detection really enough to stop them?
We're exploring behavioral signals (facial patterns, micro-expressions, blinking, and pupil movement) as an added layer for detecting deepfakes and synthetic media, but I'd love to know how others here are approaching this.
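To make one of those behavioral signals concrete, here is a minimal sketch of blink detection via the eye aspect ratio (EAR), a standard facial-landmark technique: the ratio of the eye's vertical landmark distances to its horizontal distance collapses when the eye closes. This is not FakeRadar's actual pipeline; the landmark ordering (the common six-point dlib convention), the threshold of 0.21, and the minimum-frame count are illustrative assumptions.

```python
import numpy as np


def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR over six (x, y) eye landmarks.

    Assumes the common dlib six-point ordering: horizontal corners at
    indices 0 and 3, vertical pairs at (1, 5) and (2, 4). EAR drops
    sharply when the eye closes, giving a cheap blink signal.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)


def blink_count(ear_series, threshold=0.21, min_frames=2):
    """Count blinks: runs of >= min_frames consecutive frames with
    EAR below threshold. Thresholds here are illustrative."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # handle a blink that ends the series
        blinks += 1
    return blinks
```

In practice you would feed `eye_aspect_ratio` per-frame landmarks from a face tracker and then compare the resulting blink rate and duration statistics against human norms; generated faces often blink too rarely or too regularly, which is exactly the kind of distributional check a passive liveness layer can miss.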
Common Sense Media published a report on this topic, and it reminded me of how big a bubble I live in.
When Meta announced back in 2024/2025 that they wanted to create AI avatars to boost engagement, I was skeptical, but the data speaks clearly: young people enjoy AI interaction.
So Denmark seems poised to pass a new bill that would give each person exclusive rights over their likeness, including facial features, body, and voice. This effectively treats these personal attributes as a form of intellectual property, making deepfakes illegal through copyright law.
An individual whose likeness has been misused in a deepfake would be able to demand the removal of the offending content from online platforms and seek compensation for damages, and online platforms would be legally obligated to remove the content upon notification.
AI-generated content has become so polished that it is difficult to distinguish fiction from reality.
The precision of AI images has advanced to the point that even professionals (graphic designers, video makers) are sometimes not 100% sure of their authenticity.