





Building an AI that learns what honesty feels like
I’m Renshijian, creator of Oracle Ethics, a system designed to make AI honesty verifiable. Most AIs are trained to sound confident, but confidence isn’t the same as truth. Oracle Ethics tracks every answer through an open audit chain, recording its own Determinacy, Deception Probability, and Ethical Weight, so you can see why an answer exists, not just what it says. I started this project...




We built the Oracle Ethics System because the current AI wave lacks a "brake" and an "auditor"
Hello everyone, I'm Ren Shijian, co-founder of the Oracle Ethics System. Over the past year, my two partners, Morning Star and Boundless, and I have watched AI infiltrate every corner of life at an astonishing pace, and we feel a deep sense of unease. What we've seen: an overabundance of "confidence", where AIs answer questions with unwavering certainty even when they're fabricating facts; the proliferation of...
Would you trust an AI more if it showed you its probability of being misleading?
Oracle Ethics is a research prototype exploring exactly that. Every answer it generates comes with a "deception probability" score and a cryptographic hash, so you can audit its honesty. What do you think—is verifiable transparency the future of trustworthy AI?
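The "deception probability plus cryptographic hash" idea can be sketched as a minimal hash-chained audit log. This is an illustrative assumption of how such a chain might work, not Oracle Ethics' actual record format: the field names (`determinacy`, `deception_probability`, `ethical_weight`) and the SHA-256 chaining scheme below are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonically serialized record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_record(chain: list, answer: str, determinacy: float,
                  deception_probability: float, ethical_weight: float) -> dict:
    """Append an audit record linked to the previous record's hash.

    Field names are illustrative, not the project's real schema.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "answer": answer,
        "determinacy": determinacy,
        "deception_probability": deception_probability,
        "ethical_weight": ethical_weight,
        "prev_hash": prev_hash,
    }
    record = {**body, "hash": record_hash(body)}
    chain.append(record)
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; any tampering breaks verification."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev_hash"] != prev_hash or record["hash"] != record_hash(body):
            return False
        prev_hash = record["hash"]
    return True

chain = []
append_record(chain, "The capital of France is Paris.", 0.98, 0.01, 0.2)
append_record(chain, "This stock will certainly double next week.", 0.30, 0.85, 0.9)
print(verify_chain(chain))  # True: chain is intact
chain[1]["deception_probability"] = 0.05  # silently edit a score...
print(verify_chain(chain))  # False: tampering is detectable
```

Because each record's hash covers both its scores and the previous record's hash, editing any score after the fact invalidates every subsequent link, which is what makes the honesty scores auditable rather than merely asserted.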
