All activity
Felix Christo left a comment
Hey Product Hunt! 👋 I built Mdify after burning through API credits feeding LLMs bloated web pages.
The problem: When you paste a URL into Claude or ChatGPT, you're also feeding it:
• Cookie banners
• Navigation menus
• Ads and tracking scripts
• Footer junk
All noise. Zero signal.
I ran the numbers: Clean Markdown vs raw web links = 50-60% fewer tokens. That's not just cost savings—it's better...
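
A minimal sketch of that "strip the noise" step, assuming BeautifulSoup and html2text; Mdify's actual pipeline isn't shown here, so the tag/selector lists and `page_to_markdown` are my own illustrative names:

```python
# Illustrative only: not Mdify's actual pipeline. Drops common page chrome
# before converting what remains to Markdown.
from bs4 import BeautifulSoup
import html2text

NOISE_TAGS = ["script", "style", "nav", "header", "footer", "aside", "iframe"]
NOISE_SELECTORS = ["[class*='cookie']", "[class*='banner']", "[id*='ad-']"]

def page_to_markdown(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Remove structural noise: menus, footers, tracking scripts, cookie/ad widgets.
    for tag in soup(NOISE_TAGS):
        tag.decompose()
    for selector in NOISE_SELECTORS:
        for node in soup.select(selector):
            node.decompose()
    converter = html2text.HTML2Text()
    converter.ignore_images = True  # images carry no text signal for an LLM
    converter.body_width = 0        # don't hard-wrap the output
    return converter.handle(str(soup))
```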

MDify: Clean Markdown for LLMs—cut tokens by 50-60%
Felix Christo left a comment
Why GuardSkills exists 👇
The skills.sh ecosystem is powerful, but it has a blind spot: anyone can publish a skill, and those skills run with your environment, files, and secrets. There’s no review layer like an App Store.
GuardSkills adds that missing checkpoint. It analyzes skills before execution to detect risky behavior (dangerous shell commands, filesystem writes, network access, secret...
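
To make the idea concrete, here is a minimal Python sketch of a pre-execution scan; it is not the GuardSkills npm package itself, and `RISK_PATTERNS` / `scan_skill` are illustrative names rather than its real API:

```python
# Sketch of statically scanning a skill's files for risky behavior before anything runs.
# Not GuardSkills' actual implementation; pattern lists are illustrative.
import re
from pathlib import Path

RISK_PATTERNS = {
    "dangerous shell command": re.compile(r"rm\s+-rf|curl[^\n]*\|\s*(sh|bash)|chmod\s+777"),
    "filesystem write":        re.compile(r"fs\.writeFile|shutil\.rmtree|open\([^)]*['\"]w"),
    "network access":          re.compile(r"\b(requests|fetch|axios|urllib)\b"),
    "secret access":           re.compile(r"os\.environ|process\.env|\.aws/credentials|API_KEY"),
}

def scan_skill(skill_dir: str) -> list[tuple[str, str, str]]:
    """Return (file, risk category, matched snippet) findings for a skill directory."""
    findings = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".js", ".ts", ".sh", ".md"}:
            continue
        text = path.read_text(errors="ignore")
        for category, pattern in RISK_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append((str(path), category, match.group(0)))
    return findings

if __name__ == "__main__":
    for file, category, snippet in scan_skill("./my-skill"):  # hypothetical skill path
        print(f"[{category}] {file}: {snippet!r}")
```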

GuardSkills: Scan skills before install. Ship safer AI workflows.
I ran an experiment: Website link vs Markdown input.
Result: 50-60% fewer tokens when using Markdown.
Why links fail:
• UI junk bloats context
• Ads, headers, cookies waste tokens
• More tokens = higher cost + worse reasoning
Mdify converts web pages into clean .md so LLMs get only signal, not noise. Perfect for Claude, ChatGPT, Gemini, and AI agents.
If you care about cost, accuracy, or building agents—this matters.
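
A rough way to reproduce that comparison; the URL is hypothetical, `cl100k_base` is just one tokenizer choice, and exact savings vary by page:

```python
# Rough repro of the token comparison, not Mdify internals.
import requests
import html2text
import tiktoken

def count_tokens(text: str) -> int:
    # cl100k_base is an assumption; use the tokenizer of your target model.
    return len(tiktoken.get_encoding("cl100k_base").encode(text))

url = "https://example.com/some-article"  # hypothetical page
raw_html = requests.get(url, timeout=10).text

converter = html2text.HTML2Text()
converter.ignore_images = True
markdown = converter.handle(raw_html)

raw, md = count_tokens(raw_html), count_tokens(markdown)
print(f"raw HTML: {raw} tokens | Markdown: {md} tokens | saved {1 - md / raw:.0%}")
```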

GuardSkills is a lightweight security layer for the skills.sh ecosystem that analyzes skills before execution to reduce trust risk. It inspects metadata and behavior to flag dangerous commands, suspicious network access, and secret exfiltration. Built specifically for skills.sh workflows, it applies context-aware policies and supports custom allow/deny rules. Install via npm, add one guard step, and gain visibility
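
The npm package's real config format isn't reproduced here; as a sketch of how custom allow/deny rules could gate findings in a guard step (continuing the Python scan sketch above, with `Policy`, `guard`, and `install_skill` as hypothetical names):

```python
# Hypothetical policy layer on top of the scan sketch above; not GuardSkills' real API.
from dataclasses import dataclass, field

@dataclass
class Policy:
    allow: set[str] = field(default_factory=set)  # risk categories accepted for this workflow
    deny: set[str] = field(default_factory=set)   # risk categories that always block install

DEFAULT_POLICY = Policy(
    allow={"network access"},  # e.g. a web-research skill legitimately needs HTTP
    deny={"secret access", "dangerous shell command"},
)

def guard(findings: list[tuple[str, str, str]], policy: Policy = DEFAULT_POLICY) -> bool:
    """Return True if the skill may be installed under the given policy."""
    ok = True
    for file, category, snippet in findings:
        if category in policy.deny:
            print(f"BLOCK  [{category}] {file}: {snippet!r}")
            ok = False
        elif category not in policy.allow:
            print(f"REVIEW [{category}] {file}: {snippet!r}")  # surfaced, but not blocking
    return ok

# Usage with the scan sketch above:
#   if guard(scan_skill("./my-skill")):
#       install_skill("./my-skill")  # hypothetical install step
```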

