Traceprompt

Audit-proof your AI. Pass any global audit

Traceprompt is an open-source SDK that wraps your LLM calls and generates tamper-proof audit trails, so you can prove who did what, when and with which model. Pass any global AI audit.

Paul Waweru
Maker

Hi Product Hunt!

I’m Paul, founder of Traceprompt. We’re building an open-source SDK that wraps your LLM calls and generates tamper-proof audit trails, so you can prove who did what, when, and with which model - without exposing plaintext data.
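
To make "wraps your LLM calls" concrete, here is a rough TypeScript sketch of the pattern. The names below (tracedChat, the in-memory auditLog) are illustrative placeholders, not the actual traceprompt-node API, and a real trail would encrypt payloads client-side and hash-chain entries rather than keep them in memory.

```ts
// Hypothetical sketch of the wrapping pattern. Names like tracedChat and
// auditLog are illustrative only, not the real traceprompt-node API.
import { createHash } from "node:crypto";
import OpenAI from "openai";

type AuditEntry = { userId: string; model: string; at: string; payloadHash: string };
const auditLog: AuditEntry[] = []; // stand-in for an append-only store

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function tracedChat(userId: string, prompt: string): Promise<string> {
  const at = new Date().toISOString();
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  const answer = res.choices[0].message.content ?? "";

  // Record who / when / which model plus a hash of the payload, so the trail
  // can prove the call happened without storing plaintext. A real trail would
  // encrypt the payload under your own key (BYOK) and hash-chain the entry.
  const payloadHash = createHash("sha256").update(prompt + answer).digest("hex");
  auditLog.push({ userId, model: res.model, at, payloadHash });

  return answer;
}
```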

Why this matters now

Modern AI stacks are shipping faster than their audit plans. In regulated contexts, this isn’t optional: the EU AI Act explicitly calls for logging to ensure traceability for high-risk systems; HIPAA mandates audit controls for systems touching ePHI; and SEC/FINRA record-keeping rules have long required immutable/WORM-style retention or a verifiable audit-trail alternative. In short: “prove nothing changed” is becoming table stakes.

What happens if you can’t prove it?

  • In finance alone, weak record-keeping has triggered $2B+ in penalties across more than 100 firms since 2021

  • Without trustworthy logs, post-mortems turn into guesswork and remediation stalls

  • If your logs can be bypassed or silently drop events, your “evidence” isn’t evidence. Recent community findings show how flaky AI-adjacent audit streams can be (e.g., Copilot audit log gaps), underscoring the need for independent verification.

How does Traceprompt work?
Our SDK is simple:

  1. BYOK architecture with AWS KMS. We never see plaintext prompts/responses; only you can decrypt. Other KMS providers are on the roadmap. (A sketch of the envelope-encryption pattern follows this list.)

  2. Append-only, hash-chained logs with a public Merkle anchor for independent verification (see the hash-chain sketch below). Repo: https://github.com/traceprompt/open-anchors

  3. Audit packs: export CSV rows + proofs (and receipts) when someone asks “what exactly happened at this date and time.” You can also verify the audit packs: if a single byte is altered or a row removed by a bad actor, verification fails.
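
To make point 1 concrete, here is a minimal sketch of the BYOK envelope-encryption pattern with AWS KMS: generate a data key under your own KMS key, encrypt the prompt locally with AES-256-GCM, and persist only the ciphertext plus the encrypted data key. This illustrates the pattern rather than Traceprompt's exact implementation; the region and key ID are placeholders.

```ts
// Sketch of BYOK envelope encryption with AWS KMS (illustrative only, not the
// actual Traceprompt implementation). Requires @aws-sdk/client-kms.
import { KMSClient, GenerateDataKeyCommand } from "@aws-sdk/client-kms";
import { createCipheriv, randomBytes } from "node:crypto";

const kms = new KMSClient({ region: "us-east-1" }); // placeholder region

async function encryptPrompt(plaintext: string, kmsKeyId: string) {
  // 1. Ask *your* KMS key for a fresh data key (plaintext + encrypted copy).
  const { Plaintext, CiphertextBlob } = await kms.send(
    new GenerateDataKeyCommand({ KeyId: kmsKeyId, KeySpec: "AES_256" })
  );

  // 2. Encrypt the prompt locally with AES-256-GCM using the plaintext data key.
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", Buffer.from(Plaintext!), iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();

  // 3. Store only the ciphertext, IV, auth tag and the *encrypted* data key.
  //    Reading it back requires a kms:Decrypt call that only the key owner can make.
  return {
    ciphertext: ciphertext.toString("base64"),
    iv: iv.toString("base64"),
    tag: tag.toString("base64"),
    encryptedDataKey: Buffer.from(CiphertextBlob!).toString("base64"),
  };
}
```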

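And to show why points 2 and 3 make tampering detectable, here is a toy hash chain in plain Node crypto: each entry commits to the hash of everything before it, so the recomputed head only matches the publicly anchored value if every byte of every row is intact. The real system anchors a Merkle root via open-anchors; this sketch only illustrates the general idea.

```ts
// Toy hash chain, to show why altering a single byte breaks verification.
// This sketches the general technique, not the open-anchors implementation.
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Each step commits to the previous head and the current entry's content.
function chainHead(entries: string[]): string {
  return entries.reduce((prev, entry) => sha256(prev + sha256(entry)), "GENESIS");
}

const log = [
  "2025-01-02T10:00Z user=alice model=gpt-4o",
  "2025-01-02T10:05Z user=bob model=claude-3",
];
const anchoredHead = chainHead(log); // published publicly (the "anchor")

// Later, an auditor recomputes the head from the exported rows:
const tampered = [...log];
tampered[0] = tampered[0].replace("alice", "mallory"); // one edit anywhere...

console.log(chainHead(log) === anchoredHead);      // true  -> rows are intact
console.log(chainHead(tampered) === anchoredHead); // false -> verification fails
```
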
If "AI audit trails" are on your mind or on your roadmap, I'd love to talk. There are a few ways to start:

  1. Check out the repos: review the code, install the SDK, and experiment; open issues if anything breaks

    a. https://github.com/traceprompt/traceprompt-node

    b. https://github.com/traceprompt/open-anchors

  2. Landing page: https://traceprompt.com - details on integrations and pricing; 7-day free trial (or 2M-token cap).

  3. Join our Discord: https://discord.gg/2yUSXDECQk

  4. Book a free 30-minute demo call: https://cal.com/traceprompt/traceprompt-intro?overlayCalendar=true

We'd love to hear your feedback - we'll be in the comments! If you're a dev, I'm happy to dive into more technical detail or answer any questions. If you're in the AI audit and compliance space, please do get in touch, as we have lots to learn and uncover :)

Thank you!