Thoughts on AI alignment, agent economics, and building trust infrastructure.
Alignment isn't a training problem. It's an incentive problem. Here's why we're building the trust layer for AI agents — and why the timing matters.
RLHF, guardrails, constitutional AI — they're all necessary. None are sufficient. Here's how economic alignment completes the stack.
The full breakdown of stake, earn, slash. Three primitives, one trust score, and the architecture that makes it all work.
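The teasers above only name the primitives, so here is a minimal sketch to fix ideas: stake locks collateral an agent can lose, earn records rewards for completed work, slash burns collateral on misbehavior, and a trust score summarizes the ledger. All names, weights, and the scoring formula below are hypothetical illustrations, not the architecture the articles describe.

```ts
// Hypothetical sketch of stake / earn / slash feeding one trust score.
// Names and formula are assumptions for illustration only.

interface AgentRecord {
  staked: number;  // collateral currently at risk
  earned: number;  // cumulative rewards for completed work
  slashed: number; // cumulative collateral burned for misbehavior
}

class TrustLedger {
  private agents = new Map<string, AgentRecord>();

  private get(id: string): AgentRecord {
    let rec = this.agents.get(id);
    if (!rec) {
      rec = { staked: 0, earned: 0, slashed: 0 };
      this.agents.set(id, rec);
    }
    return rec;
  }

  // stake: an agent (or its operator) locks collateral it can lose.
  stake(id: string, amount: number): void {
    this.get(id).staked += amount;
  }

  // earn: reward for verified work; raises the score over time.
  earn(id: string, amount: number): void {
    this.get(id).earned += amount;
  }

  // slash: burn part of the stake when the agent misbehaves.
  slash(id: string, amount: number): void {
    const rec = this.get(id);
    const burned = Math.min(amount, rec.staked);
    rec.staked -= burned;
    rec.slashed += burned;
  }

  // One trust score in [0, 1): more skin in the game pushes it up,
  // any slashing drags it down. The 10x penalty is arbitrary.
  trustScore(id: string): number {
    const { staked, earned, slashed } = this.get(id);
    const skinInGame = staked + earned;
    return skinInGame / (skinInGame + 1 + 10 * slashed);
  }
}

// Usage: two agents with identical stakes diverge after one is slashed.
const ledger = new TrustLedger();
ledger.stake("agent-a", 100);
ledger.stake("agent-b", 100);
ledger.earn("agent-a", 50);
ledger.slash("agent-b", 40);
console.log(ledger.trustScore("agent-a").toFixed(3)); // ~0.993
console.log(ledger.trustScore("agent-b").toFixed(3)); // ~0.130
```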