Forg3t Protocol

How should AI systems prove that they forgot data?

We’re working on Forg3t Protocol, a system focused on verifiable AI unlearning.

One question we keep running into:
Most AI systems claim they can delete or forget data, but very few can prove it in a way a regulator or auditor would accept.

Today, “forgetting” usually means retraining, policy statements, or internal assurances. That feels fragile as regulatory pressure increases.

For those building or deploying AI systems:

  • What kind of evidence would you trust to confirm that a model actually forgot specific data?

  • Is behavioral testing enough, or do you expect cryptographic or third-party verification? (Rough sketches of both directions follow below.)

  • How should this be evaluated in real world audits?
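
For concreteness, here is the flavor of behavioral evidence we have in mind: a membership-inference-style check that compares the model's loss on the supposedly forgotten examples against its loss on truly held-out data. This is a minimal sketch assuming a PyTorch classifier; every name in it (model, forgotten_batch, holdout_batch) is hypothetical.

```python
# Hypothetical behavioral check: compare loss on "forgotten" examples vs.
# examples the model never saw. If unlearning worked, the two should be
# statistically indistinguishable. All names here are illustrative only.
import torch
import torch.nn.functional as F

def mean_loss(model, examples):
    """Average cross-entropy loss over a list of (input, label) pairs."""
    model.eval()
    total = 0.0
    with torch.no_grad():
        for x, y in examples:
            logits = model(x.unsqueeze(0))           # add a batch dimension
            total += F.cross_entropy(logits, y.unsqueeze(0)).item()
    return total / len(examples)

def forgetting_gap(model, forgotten_batch, holdout_batch):
    """Loss gap between 'forgotten' and never-seen data.

    A gap near zero is weak evidence of unlearning; a clearly negative
    gap (lower loss on the forgotten set) suggests the model still
    retains information about those examples.
    """
    return mean_loss(model, forgotten_batch) - mean_loss(model, holdout_batch)
```

The weakness, and part of why we are asking, is that a near-zero gap is absence of evidence rather than proof of forgetting.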

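On the cryptographic side, one toy direction is committing to the training set with a Merkle root and publishing a new root after each deletion, so an auditor can verify that a record left the committed dataset. Again, this is purely a hypothetical sketch: it proves a record left the dataset commitment, not that the model's weights forgot it.

```python
# Toy commitment scheme using only the standard library. Deletion is made
# auditable by republishing the Merkle root over the remaining records.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over raw record bytes."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"user-1 record", b"user-2 record", b"user-3 record"]
root_before = merkle_root(records)        # published before training

# A deletion request for user-2 arrives: recompute without that record.
records = [r for r in records if r != b"user-2 record"]
root_after = merkle_root(records)         # published after deletion

assert root_before != root_after          # the public commitment changed
```
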
Curious to hear perspectives from people building AI under compliance or governance constraints.
