Paolo Rossi left a comment
Congrats on your launch! One thing I’m curious about: you say output tokens drop 63% with pre-indexed context. Does that mean the actual code quality changes too, or is it just less “thinking out loud” from the model? Because if the model is writing better code when it gets cleaner input, that’s a much bigger deal than just a cost saving.

Vexp
Cut AI coding costs 58% with pre-indexed codebase context
