Finished 2026

Why Are People Miscalibrated on AI?

A framework decomposing view formation into Optimization Target × (Information Quality × Processing Quality), with the Self-Concealing Property as the key insight: identity-protective cognition is phenomenologically indistinguishable from rigorous skepticism.

with Valen Cole · AI + Ethics Workshop 2026

Abstract

People form beliefs about AI through a process that looks like reasoning but is, structurally, an optimization for something other than truth. We model belief formation as

View = Optimization Target × (Information Quality × Processing Quality)

where Optimization Target captures what the cognitive process is actually optimizing for (truth, identity, status, comfort), and the two quality terms multiply to determine how well that optimization runs. Because they multiply, either degraded inputs or degraded processing alone is enough to drag the product down.
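To make the decomposition concrete, here is a minimal toy sketch in Python. The scalar encoding is entirely an illustrative assumption, not the paper's formalism: beliefs live in [-1, 1] (+1 is the truth-congruent view, -1 the identity-congruent view for a given claim), the Optimization Target is a blend weight, and the quality terms multiply into a convergence rate. Names like `Cognition` and `truth_weight` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Cognition:
    truth_weight: float        # Optimization Target: 1.0 = pure truth-seeking,
                               # 0.0 = pure identity protection (toy encoding)
    info_quality: float        # fidelity of the evidence reaching the agent, in [0, 1]
    processing_quality: float  # fidelity of the reasoning over that evidence, in [0, 1]

    def view(self, truth: float, identity_prior: float) -> float:
        """Where the belief lands after the process runs (toy update rule)."""
        # What the process is aimed at: a blend of the truth-congruent and
        # identity-congruent positions, set by the Optimization Target.
        target = self.truth_weight * truth + (1 - self.truth_weight) * identity_prior
        # How well the optimization runs: the quality terms multiply, so either
        # bad inputs or bad processing alone is enough to stall convergence.
        efficiency = self.info_quality * self.processing_quality
        # Move from the prior toward the target at a rate set by the product.
        return identity_prior + efficiency * (target - identity_prior)

# Two equally careful agents: identical quality terms, different Targets.
truth_seeker = Cognition(truth_weight=1.0, info_quality=0.9, processing_quality=0.9)
identity_protector = Cognition(truth_weight=0.1, info_quality=0.9, processing_quality=0.9)

print(truth_seeker.view(truth=1.0, identity_prior=-1.0))        # ~0.62: converges toward truth
print(identity_protector.view(truth=1.0, identity_prior=-1.0))  # ~-0.84: barely moves
```

Note that the two agents share identical quality terms; everything separating their final views comes from the Target. That is the sense in which the quality product governs how well the process runs while the Target governs where it goes.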

The Self-Concealing Property: when the optimization target is identity-protective rather than truth-seeking, the resulting cognition is phenomenologically indistinguishable from rigorous skepticism. Both produce careful evaluation, both reject weak evidence, both feel like thinking. The difference shows up only in which evidence each process accepts under which conditions, and that conditional pattern is invisible from the inside.
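Under the same toy assumptions, the Self-Concealing Property can be phrased as two evidence-acceptance rules that run the identical subjective procedure ("is this evidence strong enough?") and behave identically on weak evidence. The 0.7 and 0.95 thresholds below are illustrative numbers, not from the paper.

```python
# `strength` is the quality of a piece of evidence in [0, 1]; `congruent` marks
# whether it supports the agent's identity-linked position (toy encoding).

def rigorous_skeptic(strength: float, congruent: bool) -> bool:
    # One bar, applied regardless of which way the evidence points.
    return strength >= 0.7

def identity_protector(strength: float, congruent: bool) -> bool:
    # Same subjective procedure, but the bar quietly rises for threatening evidence.
    bar = 0.7 if congruent else 0.95
    return strength >= bar

# On any weak item, the two are behaviorally identical: both "reject weak evidence".
for strength in (0.2, 0.5, 0.65):
    assert rigorous_skeptic(strength, congruent=False) == identity_protector(strength, congruent=False)

# The difference appears only in the conditional pattern across items:
print(rigorous_skeptic(0.8, congruent=False))     # True: strong threatening evidence accepted
print(identity_protector(0.8, congruent=False))   # False: same evidence rejected
print(identity_protector(0.8, congruent=True))    # True: same strength, congruent -> accepted
```

No single evaluation distinguishes the two rules; only the pattern of acceptances conditional on congruence does, and that pattern is exactly what neither agent can observe from the inside.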

Implications

  • Calibration interventions that target Information Quality miss when the real bottleneck is the Optimization Target.
  • Disagreement between two careful thinkers can be fully explained without invoking either bad faith or bad reasoning.
  • The framework predicts where AI miscalibration concentrates (high-identity-stakes claims) without needing to model individual content beliefs.