The Commonplace

Output Quality

Updated Apr 06, 2026
Papers: 135 (58 full-text)
Claims: 375
Evidence strength: Mixed — several RCTs show sizable quality gains from high-accuracy AI and user training, while many domains show variable or null effects and much of the evidence is observational

Bottom Line

High-accuracy AI assistance increases human accuracy on complex judgment tasks, and training users to work with AI raises performance beyond what access alone provides (Gosciak 2026; Chen 2026). The biggest caveats are that incorrect AI suggestions can sharply reduce quality, and that in some technical domains measured gains are small or absent despite widespread adoption (Gosciak 2026; Jost 2026).

What This Means in Practice

What the Research Finds

Human–AI collaboration quality hinges on AI accuracy and user training

Interfaces and structured prompting shape alignment, with domain-dependent payoffs

Software engineering quality: limited gains and surprising failure modes

Labeling and calibration pipelines can raise downstream model reliability

Domain-specific outcomes span empathy, search quality, and professional judgment

What We Still Don't Know