Centralized rules for Coding Agents like Claude Code, Github Copilot & Cursor. Your AI coding agent automatically picks the right rules per task. Ship enterprise-ready code at 10x speed.
findable.
Hey makers & creators,
Pete here, Founder of @findable. and one of the early testers and supporters of Straion.
I’ve been working closely with @lukas_holzer and the team, as I keep seeing the problem of AI coding agents going off the rails.
Doesn't matter if you use Claude Code, Cursor, or Copilot. Yes, they make you faster, but especially in bigger orgs they often create problems.
So instead of just building, you often end up supervising. Correcting. Re-explaining context. Pulling the AI back onto the right path.
That's where Straion comes in: it helps engineering teams stick to their organisation's rules.
What impressed me early on is the simplicity of the core idea: give engineering teams a structured way to define “how we build software here,” and make sure AI coding agents actually follow those rules automatically.
Please let us know here in the comments what problems you are facing with AI coding, and how we can help.
Happy Sunday, Pete
@lukas_holzer @peterbuch Interesting angle — especially enforcing “how we build here” across AI agents. Curious: are teams adopting this more for code quality, security, or just reducing review overhead? Feels very relevant as AI-generated code scales.
Straion
@katrin_freihofner will tell you more from her product perspective!
Straion
@lukas_holzer @peterbuch @mangal_s07 Great question. We're seeing teams adopt this for all three reasons you mentioned: code quality, security, and review overhead — but review overhead is often the immediate pain (or the loudest voice in the room).
As AI coding agents generate more code, engineers increasingly become bottlenecks, spending large chunks of time reviewing instead of building. That’s manageable at small scale, but once output accelerates, the traditional review process just doesn’t keep up.
Security and quality are just as critical, though — especially at scale. As teams grow, “how we build here” (architecture patterns, security constraints, naming conventions, infra standards) becomes part of the company’s operating system. The challenge is that AI doesn’t naturally know those rules, and humans can’t manually enforce them forever.
Straion helps encode and enforce those standards automatically, so teams can scale AI-generated code without sacrificing quality, security, or maintainability.
@lukas_holzer @peterbuch @katrin_freihofner This makes a lot of sense — especially the idea that review overhead becomes the first visible bottleneck as AI output scales. Encoding “how we build here” feels less like a tooling problem and more like preserving institutional memory for AI.
Straion
@mangal_s07 Can you expand a bit on what you mean by encoding "how we build here"? Not sure if I got that!
MCP-Builder.ai
Congrats on the launch. I totally see the need, as I'm often afraid that my coding assistant is steadily drifting away from our coding guidelines.
Am I also able to set up different coding rules depending on the tech stack of my project and teams? Web, Python, ...?
Straion
@dominik_rampelt Thanks! Yeah, this is a common problem we try to fix! You can have as many rules as you want, spanning from infra rules to frontend guidelines. The tech stack doesn't really matter!
They can even be functional rules, like behavioural flows!
Straion
@dominik_rampelt Thank you Dominik! Yes, you can have different coding rules depending on the tech stack. Straion is going to automatically pick the applicable rules based on the task.
Netlify
Hey, this looks amazing! Really useful concept, especially with regard to giving focussed context to an agent and for centralising rules across repos. I'd love to know how the tool selects the right rules to use and if there's any way to see which rules have been selected for a prompt?
Straion
@orinokai We took a completely different route for rule matching than Cursor and others.
Instead of matching rules by folder or file extension, we've trained a machine learning pipeline to do the matching. It draws on a variety of signals: classifications, embeddings, labelings, and so on. Basically, we've tried to imitate the human brain! My brain doesn't locate knowledge based on a directory 😂
That way we can be completely repo-agnostic, and developers don't have to recall where the rules they need are located!
When it comes to visualisation we currently fall a bit short. We just present the output inside the terminal of Claude Code, Codex, or GitHub Copilot (you get a kind of validation report).
But we are planning a dashboard so you can see exactly which rules were applied for which task!
That's how we showcase it currently:
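To make that matching idea concrete, here's a minimal sketch of similarity-based rule selection. This is purely illustrative — a toy bag-of-words vector stands in for a trained embedding model, and the rule texts and task are made up — not Straion's actual pipeline:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-count vector.
    # A real pipeline would use a trained embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_rules(task: str, rules: list[str], top_k: int = 2) -> list[str]:
    # Rank rules by similarity to the task description,
    # instead of matching on folder paths or file extensions.
    task_vec = embed(task)
    return sorted(rules, key=lambda r: cosine(task_vec, embed(r)), reverse=True)[:top_k]

rules = [
    "frontend components must use our design system tokens",
    "database migrations require rollback scripts",
    "api endpoints must validate input with the shared schema package",
]
# The API rule ranks first because it shares vocabulary with the task.
print(match_rules("add a new api endpoint for user signup", rules)[0])
```

The point of the sketch is only the shape of the approach: rules are retrieved by what the task is about, not by where a file lives.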
This hits close to home. Coding agents are only as good as the context you give them, and right now that context lives in random markdown files scattered across repos. Having one source of truth that works across Cursor, Copilot, and Claude Code just makes sense.
Straion
@giammbo Yep, and that's a big IF. The sad truth is that a lot of companies don't even have markdown files in their repos. Their rules live in Confluence pages or scattered wikis; in the worst case they're stuck in the heads of single developers, who then leave comments on repos.
So with Straion we try to help you extract those rules from existing sites/pages and even repositories, to get you started quicker.
@lukas_holzer That's a great point — the "rules stuck in someone's head" problem is real. Extracting from Confluence and existing sources sounds like the right approach to get teams onboarded fast. Smart move.
Straion
@giammbo Thanks a lot! We are still trying to figure out the best approach, so all feedback is warmly welcome!
Straion
Hey makers, Lukas here, CEO & Co-Founder of Straion.
We built Straion after repeatedly running into the same issue while working with AI coding agents like Claude Code, Cursor, and Copilot.
They’re powerful, but they don’t naturally understand how your organization builds software. Things like internal standards, architectural decisions, security rules, or simply “how we do things here.” As a result, teams often spend a lot of time reviewing, correcting, and re-guiding the AI.
Straion is our attempt to help with that.
It gives engineering teams a central place to define their rules, and ensures those rules are automatically applied whenever AI generates code.
We have a simple goal: help teams get the speed benefits of AI without losing consistency and control.
We’re still very early, and there’s a lot we need to learn.
If you’re using AI coding tools in your team, we’d genuinely love your feedback: What works, what doesn’t, and where something like Straion could be useful (or not).
Also, I'm always happy to jump on a call.
And if you know engineering leaders or teams at larger organizations who are actively using AI for software development, introductions would mean a lot. We’re especially interested in learning from real-world setups + challenges.
Thanks so much for checking out Straion and for any feedback. I’ll be here all day to answer questions and learn from you.
Lukas
Hi, looks awesome @lukas_holzer! Is there any limitation in terms of team size, or can it be used by e.g. a 2-person team and a 30-person team with the same results?
Straion
@bernischaffer Hey, no, there's no limitation in terms of team size; you can use Straion with a small team, but we're focusing on enterprise clients because we've seen that the problems there are at a different magnitude. Not saying small teams don't have those problems, but for a solo developer, managing the rules in an AGENTS.md is doable.
If you work in a large monorepo with multiple services (frontend/backend), though, it's definitely something you should take a look at!
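For anyone curious what that solo-developer baseline looks like: an AGENTS.md is just a markdown file of conventions at the repo root that agents read before generating code. A generic illustration (not a Straion artifact; the section names and rules are invented):

```markdown
# AGENTS.md

## Code style
- TypeScript strict mode everywhere; no `any`.

## Architecture
- New services go under `services/<name>` with their own README.

## Security
- Never log request bodies; load secrets from the secret manager, not `.env` files.
```

This works fine for one person and one repo; the scaling problem discussed in this thread starts when dozens of these files drift apart across teams.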
As a founder of a security consultancy, watching how quickly the AI and agentic movement has taken off has been incredible, but also has introduced new and interesting challenges in keeping the company safe!
I am super excited to see what Straion can do in keeping engineering teams moving quickly while keeping the codebase clean and company policies met!
Straion
@patrickfarwick Thanks! Yeah, this whole thing is moving at light speed (or even warp speed?).
With Straion we try to help devs avoid having to keep that pace and commit to a single technology. We try to be a proxy managing all the rules, so you don't have to think about skills, how to structure .md files so they're picked up best by the latest model, context engineering, etc., or even whether to go with Cursor or Claude Code.
We are provider-agnostic and optimize the rules internally so that agents pick them up best!