Centralized rules for coding agents like Claude Code, GitHub Copilot & Cursor. Your AI coding agent automatically picks the right rules per task. Ship enterprise-ready code at 10x speed.
Straion
Hey makers & creators,
Pete here, Founder of @findable and one of the early testers and supporters of Straion.
I’ve been working closely with @lukas_holzer and the team, as I keep seeing the problem of AI coding agents going off the rails.
It doesn't matter if you use Claude Code, Cursor, or Copilot. Yes, they make you faster, but especially in bigger orgs they often create problems.
So instead of just building, you often end up supervising. Correcting. Re-explaining context. Pulling the AI back onto the right path.
That's where Straion comes in: it helps engineering teams stick to their organisation's rules.
What impressed me early on is the simplicity of the core idea: give engineering teams a structured way to define “how we build software here,” and make sure AI coding agents actually follow those rules automatically.
Please let us know here in the comments what problems you are facing with AI coding, and how we can help.
Happy Sunday, Pete
@lukas_holzer @peterbuch Interesting angle — especially enforcing “how we build here” across AI agents. Curious: are teams adopting this more for code quality, security, or just reducing review overhead? Feels very relevant as AI-generated code scales.
Straion
@katrin_freihofner will tell you more from her product perspective!
Straion
@lukas_holzer @peterbuch @mangal_s07 Great question. We’re seeing teams adopt this for all three reasons you mentioned: code quality, security, and review overhead. But review overhead is often the immediate pain (or the loudest voice in the room).
As AI coding agents generate more code, engineers increasingly become bottlenecks, spending large chunks of time reviewing instead of building. That’s manageable at small scale, but once output accelerates, the traditional review process just doesn’t keep up.
Security and quality are just as critical, though — especially at scale. As teams grow, “how we build here” (architecture patterns, security constraints, naming conventions, infra standards) becomes part of the company’s operating system. The challenge is that AI doesn’t naturally know those rules, and humans can’t manually enforce them forever.
Straion helps encode and enforce those standards automatically, so teams can scale AI-generated code without sacrificing quality, security, or maintainability.
@lukas_holzer @peterbuch @katrin_freihofner This makes a lot of sense — especially the idea that review overhead becomes the first visible bottleneck as AI output scales. Encoding “how we build here” feels less like a tooling problem and more like preserving institutional memory for AI.
Straion
@mangal_s07 Can you expand a bit on what you mean by encoding “how we build here”? Not sure I got that!
MCP-Builder.ai
Congrats on the launch. I totally see the need, as I'm often afraid that my coding assistant is steadily drifting away from our coding guidelines.
Will I also be able to set up different coding rules depending on the tech stack of my project and teams? Web, Python, ...?
Straion
@dominik_rampelt Thanks! Yeah, this is a common problem we're trying to fix! Sure, you can have as many rules as you want, spanning from infra rules to frontend guidelines. The tech stack doesn't really matter!
They can even be functional rules, like behavioural flows!
Straion
@dominik_rampelt Thank you Dominik! Yes, you can have different coding rules depending on the tech stack. Straion is going to automatically pick the applicable rules based on the task.
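To make the rule-picking idea concrete, here is a minimal sketch of how per-task selection can work in principle. Everything in it (the rule file names, the globs, the function) is a hypothetical illustration, not Straion's actual format or API:

```python
# Hypothetical sketch of per-task rule selection (NOT Straion's real format).
# Idea: each rule set declares which paths it applies to; given the files a
# task touches, only the matching rule sets are injected into the agent's context.
from fnmatch import fnmatch

# Made-up central registry: rule set -> path globs it applies to.
RULE_SETS = {
    "frontend-guidelines.md": ["web/**/*.tsx", "web/**/*.css"],
    "python-style.md": ["services/**/*.py"],
    "infra-standards.md": ["infra/**/*.tf"],
}

def applicable_rules(touched_files):
    """Return the rule sets whose globs match any file the task touches."""
    selected = []
    for rule_set, globs in RULE_SETS.items():
        if any(fnmatch(f, g) for f in touched_files for g in globs):
            selected.append(rule_set)
    return selected

# A task editing a React component and a Terraform file picks up two rule sets:
print(applicable_rules(["web/src/App.tsx", "infra/envs/prod/main.tf"]))
# -> ['frontend-guidelines.md', 'infra-standards.md']
```

The point of a central layer is that a registry like this lives in one place, instead of being copy-pasted into every repo.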
Straion
Hey makers, Lukas here, CEO & Co-Founder of Straion.
We built Straion after repeatedly running into the same issue while working with AI coding agents like Claude Code, Cursor, and Copilot.
They’re powerful, but they don’t naturally understand how your organization builds software. Things like internal standards, architectural decisions, security rules, or simply “how we do things here.” As a result, teams often spend a lot of time reviewing, correcting, and re-guiding the AI.
Straion is our attempt to help with that.
It gives engineering teams a central place to define their rules, and ensures those rules are automatically applied whenever AI generates code.
We have a simple goal: help teams get the speed benefits of AI without losing consistency and control.
We’re still very early, and there’s a lot we need to learn.
If you’re using AI coding tools in your team, we’d genuinely love your feedback: What works, what doesn’t, and where something like Straion could be useful (or not).
I'm also always happy to jump on a call.
And if you know engineering leaders or teams at larger organizations who are actively using AI for software development, introductions would mean a lot. We’re especially interested in learning from real-world setups + challenges.
Thanks so much for checking out Straion and for any feedback. I’ll be here all day to answer questions and learn from you.
Lukas
Straion is badly needed. There is no way to centrally manage .md files, collaborate on them, and dynamically update them across several repositories.
Looking forward to what the team will build!
Straion
@panagiotis_papadopoulos Yeah, good point about the updating! That's indeed a case a lot of companies don't think about!
They just think adding the rules once is enough. But what if you have 3 repos with the same frontend rules? You don't want to go into each repo and update the AGENTS.md or CLAUDE.md files there whenever you decide on new rules or guidance.
I'll bet they'll soon be out of date!
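For illustration, the manual workaround teams usually reach for is a sync script: one canonical rules file copied into every repo. A rough sketch of that chore (all paths are made up):

```python
# Rough sketch of the manual workaround (hypothetical paths): copy one canonical
# rules file into every repo that needs it. This is exactly the chore that gets
# forgotten, which is how the per-repo copies drift out of date.
import shutil
from pathlib import Path

CANONICAL_RULES = Path("rules/frontend-rules.md")  # single source of truth
REPOS = [Path("../web-app"), Path("../admin-portal"), Path("../design-system")]

for repo in REPOS:
    target = repo / "AGENTS.md"
    shutil.copyfile(CANONICAL_RULES, target)
    print(f"updated {target}")
```

And even this only helps if someone remembers to run it, which is the gap a central rules layer closes.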
Really strong concept. The ‘rules for AI agents’ angle is interesting — are you positioning this more as a governance layer for teams or as a productivity control tool for individual devs?
Straion
@richard_rucker_monteiro Thank you for your message, Richard! Straion is the context (or governance) layer and works best for software engineering teams with 100+ people. This is where the problem we are solving is most pronounced. Individual developers often use AGENTS.md or CLAUDE.md files, or even write their own custom skills, but these approaches don’t scale well across larger teams.
Let me know if you’d like to go deeper into this topic.
Straion
@richard_rucker_monteiro We're currently starting as a productivity control tool (with the governance layer as the aim). The point is: if you use agentic development with multiple agents and have to babysit them, it doesn't feel like the promised 10x development productivity.
So we're targeting that first, helping you get true 10x speed!
We're already cooking up the next thing here ;)
This hits close to home. Coding agents are only as good as the context you give them, and right now that context lives in random markdown files scattered across repos. Having one source of truth that works across Cursor, Copilot, and Claude Code just makes sense.
Straion
@giammbo Yep, though that's a big IF. The sad truth is that a lot of companies don't even have markdown files in their repos. They have their rules in Confluence pages or scattered wikis; in the worst case, they're stuck in the heads of individual developers who then leave comments on repos.
So with Straion we try to help you extract those rules from existing sites/pages and even repositories, to get you started quicker.
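As a hypothetical illustration of that first sweep (not our actual extraction pipeline), collecting the rule files agents already read could look like this:

```python
# Hypothetical first step of rule extraction: sweep a directory of cloned repos
# for the rule files agents already read (AGENTS.md, CLAUDE.md, Cursor's
# .cursor/rules/*.mdc) so they can be reviewed and merged into one central set.
from pathlib import Path

RULE_FILE_PATTERNS = ["AGENTS.md", "CLAUDE.md", ".cursor/rules/*.mdc"]

def collect_rule_files(checkout_root: Path) -> list[Path]:
    """Find every known rule file under a directory of cloned repos."""
    found = []
    for pattern in RULE_FILE_PATTERNS:
        found.extend(checkout_root.rglob(pattern))
    return sorted(found)

for path in collect_rule_files(Path("repos")):  # made-up checkout directory
    print(path)
```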
@lukas_holzer That's a great point — the "rules stuck in someone's head" problem is real. Extracting from Confluence and existing sources sounds like the right approach to get teams onboarded fast. Smart move.
Straion
@giammbo Thanks a lot! We're still trying to figure out the best approach, so all feedback is warmly welcome!
Hi, looks awesome @lukas_holzer! Is there any limitation in terms of team size, or can it be used with, e.g., a 2-person team and a 30-person team with the same results?
Straion
@bernischaffer Hey, no, there is no limitation in terms of team size; you can use Straion with a small team. But we're focusing on enterprise clients because we've seen that the problems there are of a different magnitude. Not saying small teams don't have those problems, but for a solo developer, managing the rules in an AGENTS.md is doable.
If you work in a large monorepo with multiple frontend/backend services, though, it's definitely something you should take a look at!