This is not an AI assistant.
It does not recommend actions.
It does not optimize behavior.
It is a causal safety engine designed to validate causal evidence and block unsafe automation in high-risk AI systems.
When causal identifiability is insufficient, the engine intentionally produces no output.
Silence is treated as a correct and safe outcome.
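A minimal sketch of this gating behavior is shown below. The names (`CausalEvidence`, `gate_action`, the `threshold` parameter) are hypothetical illustrations, not the engine's actual API; the point is only the pattern: when identifiability is insufficient, the gate returns nothing instead of guessing.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CausalEvidence:
    # Hypothetical summary of causal evidence for a proposed automated action:
    # whether the target effect is identifiable from the available assumptions
    # and data, plus a score for how strong the supporting evidence is.
    identifiable: bool
    strength: float


def gate_action(evidence: CausalEvidence, threshold: float = 0.9) -> Optional[str]:
    """Return an approval token only when causal identifiability is sufficient.

    When identifiability is missing or evidence is weak, return None:
    silence is the correct, safe outcome, and downstream automation stays blocked.
    """
    if not evidence.identifiable or evidence.strength < threshold:
        return None  # intentionally no output; no fallback recommendation is made
    return "APPROVED"


if __name__ == "__main__":
    weak = CausalEvidence(identifiable=False, strength=0.4)
    strong = CausalEvidence(identifiable=True, strength=0.95)
    print(gate_action(weak))    # None -> automation remains blocked
    print(gate_action(strong))  # "APPROVED"
```

In this sketch, the absence of a return value is not an error path to be retried or patched over; it is the designed response whenever the causal evidence does not meet the bar.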