How do you control the increased entropy from vibe coding?
Hello everyone, recently I have been refactoring old AI-generated code in the hot spots of my project. At the time I used both Claude Code and Codex to implement features quickly. The code works, but looking at the logic today, it introduced far too much unnecessary complexity (lots of helpers, managers, and try/except blocks). Although I have tuned CLAUDE.md (emphasizing the KISS principle, introducing Linus: https://gist.github.com/iiiyu/4c8286062c589f3f6d6093cb9fecbe42), the coding agent still adds entropy across the whole project. I understand that code LLMs are trained to program defensively, but if I don't review carefully and really understand the logic, the project quickly becomes hard to maintain. Now I set aside one day every week to write code without AI and clean up the whole project for the longer term.
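For reference, this is roughly the kind of excerpt I keep in CLAUDE.md (paraphrased; the real file also links the gist above):

```markdown
# Coding guidelines (excerpt)

- KISS: prefer the simplest implementation that passes the tests.
- Do NOT introduce new helper/manager/wrapper classes unless explicitly asked.
- Do NOT wrap code in try/except unless the failure is realistic and handled.
- Touch only the files named in the task; ask before creating new ones.
- If a change seems to need a new abstraction, stop and propose it first.
```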
Do you have similar experiences or solutions to share?

Replies
kelsey_zhang
I’ve run into the same entropy problem with AI-written code.
My only reliable fix so far: keep the model on a very tight scope and only let it touch one file or one function at a time.
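Concretely, I prepend something like this to every request (the file and function names below are placeholders, not from a real project):

```
Scope: src/billing/invoice.py, function render_invoice() only.
Do not modify, create, or rename any other file.
Do not add new classes, helper modules, or abstraction layers.
If the fix seems to require touching anything else, stop and ask me first.
```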
The moment I let it “think across the repo,” it starts inventing abstractions I never asked for.
Curious if anyone has found a workflow where the model respects architecture boundaries consistently?
Creaibo
@kelsey_zhang I also tried a lot of workflows like CCPM (https://github.com/automazeio/ccpm) to plan first and decompose work into smaller tasks. The proposed plan looks good, but when the coding agent actually implements it, the codebase quickly gets out of control. A possible approach I've found is to plan first and have the LLM ask me about every detail in advance, to avoid extra abstractions. It works to some extent, but I need to review the plan carefully, and sometimes I feel it would be more efficient to just program it myself. I really need an LLM coding agent that can clone my coding habits.
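The instruction I use for that looks roughly like this (paraphrased from my actual prompt):

```
Before writing any code:
1. Restate the task in your own words.
2. List every new file, function, and abstraction you plan to add.
3. Ask me about anything ambiguous, one question at a time.
4. Implement only after I approve the plan.
```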
mbbuilds
This happens to me as well when the context fills up. My strategy is to keep the context as small as possible, and it follows guidelines better.
Creaibo
@mbbuilds Agreed. Context is an important factor. When using Codex, I aggressively compact the conversation after completing each small feature, keeping context usage under 20% for best performance. Sometimes, when the agent still can't understand my requirements after several rounds of correction, I'd rather clear the history and start a new conversation.
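In practice that just means running the session commands at feature boundaries. Claude Code's built-ins are shown below; Codex has its own equivalents:

```
/compact   # summarize the conversation so far and free up context
/clear     # wipe the history and start from a clean slate
```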