Cencurity v2 — No terminal, no config. One click to enable.
Sharing an update for anyone who read the original post. The initial version required running a Docker image manually, logging in with a bootstrap key, and entering the LLM base URL by hand. We were aware the setup barrier was high, and received the same feedback directly. We made three changes. First, the Docker setup and bootstrap login flow have been removed. Second, manual LLM configuration...
Cencurity Engine Open Source Release
A lot of AI coding discussions focus on what happens before generation (prompts) or after it (code review), but the generation step itself feels like a blind spot. That's what Cencurity is trying to address: adding a layer that works during generation, not just before or after. If you're building with AI coding tools, I'd love to hear how you're thinking about this...


Cencurity Engine – Security for AI-generated code (CAST)
Shipping AI tools is getting weird. We tried to release a VS Code extension. It kept hitting 500 errors. No explanation. Maybe just a glitch. Maybe something else. Either way, we stopped waiting and shipped the engine instead. Cencurity Engine (CAST): a streaming security engine for AI-generated code. Instead of scanning code after it's written, Cencurity runs inline with code generation and...
Cencurity VS Code Extension
Most security tools scan code after it's written. Cencurity stops insecure code as it's generated. Real-time security for AI-generated code. Cencurity runs as a proxy between your IDE and LLM, analyzing outputs and blocking risky patterns before they are used.
• Works with OpenAI, Claude, Gemini, and more
• Test and verify your security protection instantly
• Detects and blocks risky code in...
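To make the "blocks risky patterns as it's generated" idea concrete, here is a minimal sketch of inline stream scanning. The pattern list, function name, and return shape are all illustrative assumptions for this sketch, not Cencurity's actual rules or API:

```python
import re

# Illustrative risky-pattern rules; Cencurity's real rule set is not public,
# so these regexes are assumptions for the sketch.
RISKY_PATTERNS = [
    (re.compile(r"\beval\s*\("), "use of eval()"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
    (re.compile(r"(?i)api[_-]?key\s*=\s*['\"][\w-]{16,}['\"]"), "hardcoded API key"),
]

def scan_stream(chunks):
    """Accumulate streamed generation chunks and stop at the first risky match.

    Returns (blocked, reason). Checking the buffer after every chunk is what
    lets a proxy cut the stream off mid-generation instead of scanning the
    finished file afterwards.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        for pattern, reason in RISKY_PATTERNS:
            if pattern.search(buffer):
                return True, reason
    return False, None
```

Note that patterns can straddle chunk boundaries (e.g. "ev" in one chunk, "al(" in the next), which is why the sketch matches against the accumulated buffer rather than individual chunks.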
Hey PH — Park, product engineer building a security layer for AI coding tools
Hey everyone 👋 I’m Park, a product engineer currently building something around AI code security. I’ve been using tools like Cursor and Claude a lot, and honestly, the speed is insane. But after a while, I started noticing something: The code often looks correct on the surface, but if you slow down and inspect it, there are subtle issues hidden inside — things that are easy to miss if you trust...
Are devs trusting AI-generated code too much?
Cencurity progress update
Cencurity progress update:
• IDE extensions (VS Code / Cursor) in progress
• Team SaaS version ready, waiting for payment API approval
• Community version updated on GitHub yesterday (security hardening + proxy improvements)
Next step: real developer adoption.
Community version: https://github.com/cencurity/cencurity
Building the IDE extension layer for Cencurity
Working on the IDE extension layer for Cencurity.

Architecture:
IDE / Agents
↓
Cencurity Gateway
↓
LLM providers (OpenAI, Anthropic, Gemini, DeepSeek…)

Goal: give teams visibility and control over LLM traffic. The upcoming VSCode / Cursor extension will:
• auto-configure the proxy
• surface threat alerts in the IDE
• link directly to audit logs

The team version is now complete and currently...
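A minimal sketch of what "auto-configure the proxy" could mean in practice: rewriting each provider's base URL so the IDE's LLM traffic flows through the gateway. The gateway address, path scheme, and function name here are assumptions for illustration, not Cencurity's documented interface:

```python
# Assumed local gateway address for the sketch, not Cencurity's documented default.
GATEWAY = "http://localhost:8080"

# Real upstream endpoints the gateway would forward to.
PROVIDER_ENDPOINTS = {
    "openai": "https://api.openai.com/v1",
    "anthropic": "https://api.anthropic.com/v1",
    "gemini": "https://generativelanguage.googleapis.com/v1beta",
}

def route_through_gateway(provider: str) -> str:
    """Return a gateway URL that embeds the provider name, so the gateway
    knows which upstream endpoint to forward the request to."""
    if provider not in PROVIDER_ENDPOINTS:
        raise KeyError(f"unknown provider: {provider}")
    return f"{GATEWAY}/proxy/{provider}/v1"
```

An extension would then point the IDE's LLM base URL at the returned address instead of the provider's, which is what makes the gateway transparent to the editor.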
Building the Team Version of Cencurity (Central Policies + Audit Export)
We’re currently building the team version of Cencurity. The first version focused on protecting individual AI/LLM usage through a security proxy layer. Now we’re extending it toward team-wide control and visibility.

New additions in progress:
• Central policy enforcement across tenants
• Tenant-level activity visibility
• Exportable audit logs (CSV)
• Aggregated threat scoring

The goal is...
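As a sketch of the CSV audit export item above: the field names and event shape below are assumed for illustration, since the post doesn't specify the real export schema.

```python
import csv
import io

# Assumed audit-log fields for the sketch; the actual export schema is not
# described in the post.
FIELDS = ["timestamp", "tenant", "provider", "verdict", "threat_score"]

def export_audit_csv(events):
    """Serialize audit events (one dict per proxied LLM request) to CSV text.

    Missing fields are written as empty cells so partial events still export.
    """
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    for event in events:
        writer.writerow({field: event.get(field, "") for field in FIELDS})
    return out.getvalue()
```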
Building the Team version of Cencurity
Quick update after launch: We’re now building the Team version of Cencurity. What we noticed from the comments and DMs is that teams aren’t just worried about model performance — they’re worried about auditability and control. The next step is central policy enforcement + per-tenant visibility + real-time threat scoring. Still early. Iterating fast. Appreciate all the feedback so far.