Research Pillar

Human-System Trust

Designing how humans shape, inspect, and govern a living computational system. When software is alive and continuously generated, new trust models are needed.

Current Frontier

Governance models for AI software: how do humans maintain meaningful control over a system that generates its own behavior?

Key Questions

01

How do you inspect and audit software that doesn't exist as static code?

02

What governance models work for living software?

03

How do you build trust in a system that behaves probabilistically?
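One concrete pattern behind question 03, sketched below under stated assumptions: treat trust as a statistical property and accept behavior only when it holds across repeated trials rather than a single run. The `runSystem` hook, trial count, and pass-rate threshold are all illustrative, not part of any published method.

```typescript
// Hypothetical sketch: repeated-trial acceptance testing for a system
// whose behavior is probabilistic. A single passing run proves little;
// a pass rate over many trials is something a governance layer can gate on.

type Trial = { passed: boolean };

async function acceptanceTest(
  runSystem: () => Promise<Trial>, // one invocation of the living system
  trials: number,                  // sample size, e.g. 200
  minPassRate: number              // acceptance threshold, e.g. 0.95
): Promise<{ passRate: number; accepted: boolean }> {
  let passes = 0;
  for (let i = 0; i < trials; i++) {
    const result = await runSystem();
    if (result.passed) passes++;
  }
  const passRate = passes / trials;
  return { passRate, accepted: passRate >= minPassRate };
}
```

Because the system is continuously regenerated, a test like this would need to be re-run on a schedule, not once at release.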

Key Papers

Is Sora a World Simulator?

May 2024

Challenges the claim that visual quality equates to world understanding. Critical framing for trust calibration.


Critiques of World Models as Planning

Jul 2025

Honest assessment of failure modes: shallow coherence, error explosion, and generality limits.


World-in-World: Closed-Loop WM Evaluation

ICLR 2026 Oral

Benchmarking infrastructure for evaluating world models. Trust requires measurability.


Current Insights

Trust in The Last Computer is fundamentally different from trust in traditional software. You can't read the source code because there is no source code.

The intent contract becomes the auditable artifact. Instead of inspecting code, you inspect the intent specification and verify that the delivered experience matches it.
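A minimal sketch of what such a contract might look like, assuming a declarative schema of invariants plus machine-checkable observables. Every field name and the `audit` helper here are hypothetical illustrations, not an existing API.

```typescript
// Hypothetical sketch: an intent contract as the auditable artifact.
// Auditors inspect this declared specification and check the observed
// experience against it, rather than reading source code that does not exist.

interface IntentContract {
  id: string;
  intent: string;       // human-readable statement of purpose
  mustAlways: string[]; // invariants the experience must satisfy
  mustNever: string[];  // prohibited behaviors
  // Machine-checkable predicates over observed properties of the experience.
  observables: Record<string, (observed: unknown) => boolean>;
}

interface AuditFinding {
  check: string;
  satisfied: boolean;
}

// Run every observable check in the contract against one observation.
function audit(
  contract: IntentContract,
  observation: Record<string, unknown>
): AuditFinding[] {
  return Object.entries(contract.observables).map(([check, predicate]) => ({
    check,
    satisfied: predicate(observation[check]),
  }));
}

// Illustrative usage with an invented contract.
const checkout: IntentContract = {
  id: "checkout-v1",
  intent: "Let a user complete a purchase in under three steps",
  mustAlways: ["show total price before confirmation"],
  mustNever: ["charge without explicit confirmation"],
  observables: {
    stepsToComplete: (n) => typeof n === "number" && n <= 3,
  },
};

const findings = audit(checkout, { stepsToComplete: 2 });
// findings: [{ check: "stepsToComplete", satisfied: true }]
```

The design point is that the contract, not the generated behavior, is the stable object of governance: it can be versioned, reviewed, and re-audited as the system regenerates itself.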