Artem Anikeev

Fakeradar
Founder @FakeRadar.io

Artem Anikeev

8d ago

Does the world need software that recognizes AI in videos?

As you may know, a large amount of video content today is generated using artificial intelligence with tools like Sora, Veo, Kling, and others.

The quality of this kind of video is constantly improving and has already reached a very respectable level. At the same time, it is extremely inexpensive compared to traditional video production with actors, camera crews, and everything that comes with it. I'm confident that in a year or two, about 95% of the video published on the internet will be synthetic, fully generated by AI. It feels like the world is heading toward a problem of widespread distrust in video content. We used to say that seeing is believing, but it seems that this rule no longer applies.

BlogBowl launches this Friday ❤️

Hey Product Hunters

After 1.5 years of building (and rebuilding), I'm finally launching BlogBowl this Friday! (31.10)

Artem Anikeev

4mo ago

Tomorrow FakeRadar.io launches on ProductHunt (part 2)

A week ago, our team tried to launch FakeRadar.io on Product Hunt. But something went wrong and we weren't included in the Featured list.

FakeRadar.io is a tool for detecting deepfakes in real time during video calls.

We decided to postpone the launch until this Friday (that's tomorrow!).

Product Hunt comments update: How do you think this will affect the discussion atmosphere?

Today, I noticed a small update on my profile while replying to your comments.

A new button has been added: specifically, the option to "downvote" a comment.

Artem Anikeev

4mo ago

Introducing FakeRadar: Real-Time Deepfake Detection for Video Calls

Hi Product Hunters!

I'm Artem Anikeev, founder of FakeRadar.io, the world's first real-time deepfake detection tool designed specifically for video calls.

Why FakeRadar?

The rise of deepfake technology has brought new risks to everyday digital communication. From fraudulent job interviews to high-stakes banking scams, the threat is real - and constantly evolving. FakeRadar was born to address this urgent need with cutting-edge, accessible technology.

Julian Wong

4mo ago

The psychology of second launches - anyone else terrified? 😅

Hey PH community!

Random thought while I'm overthinking everything: second launches hit different than first ones.

The Future of Safe Video Calls: FakeRadar.io

Hello, Product Hunt community!

My name is Artem Anikeev, and I'm the creator of FakeRadar.io - a tool designed to spot deepfakes during video calls, right as they happen.

Gabe Perez

5mo ago

Ear (3) or AirPods Pro 3: Which would you pick?

Both AirPods Pro 3 and Ear (3) launched this month. I'm curious what folks would get. I really like the design of Ear (3), and I can see myself using the Super Mic on the case a lot. But from seeing all the reviews of the AirPods Pro 3, it seems their sound quality, ANC, and microphone are better. So I might have to pick those as the winner for me. What does everyone else think?

Artem Anikeev

5mo ago

How sure are you that you are talking to a real person during video calls?

Have you ever faced a situation where you were talking to an avatar instead of a real person during a video call? Modern technologies allow creating very realistic personas, and this can be exploited on dating sites, during job interviews, or by scammers.

This is a hot topic these days.

Hi! I'm Artem, and we're planning to launch FakeRadar on ProductHunt very soon. It's a service that allows you to identify whether you're talking to a real person or a "non-human".

What activities to do after the PH launch?

Yesterday, we discussed how often it is beneficial to launch on this platform.

With each launch, there is a benefit of "exposing yourself".

How often should you launch on Product Hunt?

One of the common questions I get is: "How often can you publish a product on Product Hunt?"

The guidelines state this clearly:

"You can launch as often as you have new significant product iterations available."

My hypotheses for FoundersAround failed. What would you do next?

Hey PH

I spent last month testing some assumptions, like: founders want to meet others in person. These assumptions failed, as it's not as simple as that. There are some intricacies.

Well, I think I'll come back to the original assumption that sort of worked. People liked being on the map, sharing their profile, and getting discovered.

Do you have an exit strategy when starting a new project?

Starting with the end in mind can completely change how you play the game. It sets the rules from day one and gives you clarity on when it's time to step away.

I've noticed founders fall into two camps:

Aroosa Virk

6mo ago

Is liveness detection enough to block deepfakes, or do you need behavioral signals too?

Hi everyone,

We've been seeing more sophisticated deepfake attempts lately, especially ones that retry multiple times with tiny changes. It made us wonder:

  • Is passive liveness detection really enough to stop them?

  • We're exploring behavioral signals (like facial patterns, micro-expressions, blinking, and pupil movement) as an added layer to detect deepfakes and synthetic media, but I'd love to know how others here are approaching this.
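As a hypothetical illustration of one such behavioral signal (not any vendor's actual method), blink behavior is often tracked via the eye aspect ratio (EAR) computed from six eye landmarks: the ratio drops sharply while the eye is closed, and an unnaturally low or absent blink rate can flag synthetic video. A minimal sketch, assuming landmark coordinates are already extracted by some face-tracking step:

```python
import math

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks ordered p1..p6
    (horizontal corners p1/p4, upper lid p2/p3, lower lid p6/p5).
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops during a blink."""
    p1, p2, p3, p4, p5, p6 = pts
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks as runs of at least min_frames consecutive
    frames whose EAR falls below the threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # handle a blink that ends the sequence
        blinks += 1
    return blinks
```

In a real pipeline, the per-frame EAR series would come from a landmark detector, and the blink rate over a sliding window would feed a classifier alongside other signals; the threshold and frame counts here are illustrative placeholders.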

Nika

7mo ago

Is interacting with AI characters becoming the norm for you? (Your POV)

Common Sense Media published a report on this topic, and it reminded me of how big a bubble I live in.

When Meta announced back in 2024/2025 that they wanted to create AI avatars to boost engagement, I was skeptical, but the data speaks clearly: young people enjoy AI interaction.

Sandy Suh

7mo ago

A New Way to Stop Deepfakes?

So Denmark seems poised to pass a new bill that would give each person exclusive rights over their likeness, including facial features, body, and voice. This effectively treats these personal attributes as a form of intellectual property, making deepfakes illegal through copyright law.

An individual whose likeness has been misused in a deepfake would be able to demand the removal of the offending content from online platforms and seek compensation for damages, and online platforms would be legally obligated to remove the content upon notification.

Nika

10mo ago

How do you fact-check information in the era of Deepfakes and AI?

AI-generated creations have become so polished that it is difficult to distinguish fiction from reality.

The precision of AI images has advanced to the point that even professionals (graphic designers, video makers) are sometimes not 100% sure of their authenticity.