The Commonplace

How you reveal AI matters more than whether it exists: four disclosure modes—No AI, Hidden, Translucent, Visible—trade off accountability, autonomy, and coordination cost; the paper maps this design space and supplies a lab instrument to test its effects on team reasoning and authorship.

Who Gets Credit? Operationalizing AI Disclosure as Epistemic Coordination in Human-AI Teams
Hanjing Shi, Dominic DiFranzo · Fetched April 17, 2026 · Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems
Source: Semantic Scholar · Paper type: theoretical · Evidence strength: n/a · Relevance: 7/10 · DOI
The paper argues that how AI assistance is disclosed — not merely whether AI is present — shapes authorship, accountability, and coordination in collaborative work, and it introduces a four-fold disclosure taxonomy plus an experimental instrument to study these effects.

As generative AI becomes an ambient presence in collaborative work, a new social ambiguity emerges around authorship and responsibility. This condition of authorship uncertainty reshapes how teams attribute ideas, negotiate accountability, and coordinate collective reasoning. Prior research often treats AI presence as binary, framing it either as a hidden tool or a visible teammate. We argue that what matters in practice is the design of disclosure: how systems reveal, signal, or conceal AI assistance within collaboration. We introduce an AI Disclosure Design Space that conceptualizes disclosure as an epistemic coordination mechanism, articulating four configurations—No AI, Hidden AI, Translucent AI, and Visible AI—each trading off among accountability, autonomy, and coordination cost. We further contribute a research instrument that operationalizes these configurations in a collaborative chat setting and articulate testable design conjectures. By framing disclosure as epistemic infrastructure, this work outlines a conceptual roadmap for future empirical and design research on Human–AI collaboration.

Summary

Main Finding

Disclosure design — how systems reveal, signal, or conceal AI assistance — is the critical factor shaping authorship, accountability, and coordination in collaborative work. The paper proposes an AI Disclosure Design Space with four configurations (No AI, Hidden AI, Translucent AI, Visible AI) and frames disclosure as an epistemic coordination mechanism that trades off accountability, autonomy, and coordination cost. It also provides a research instrument to operationalize and test these configurations in collaborative chat settings.

Key Points

  • Authorship uncertainty grows as generative AI becomes ambient in teamwork; this affects attribution of ideas, responsibility for outcomes, and collective reasoning.
  • Prior work has treated AI presence as binary (hidden tool vs. visible teammate); real-world effects depend on disclosure design.
  • The authors introduce an AI Disclosure Design Space with four configurations:
    • No AI: the system does not use AI.
    • Hidden AI: AI is used but not revealed to collaborators.
    • Translucent AI: AI assistance is signaled or annotated partially.
    • Visible AI: AI presence and contributions are clearly surfaced and attributable.
  • Each configuration involves trade-offs among:
    • Accountability: who is responsible for outputs
    • Autonomy: human control and agency
    • Coordination cost: effort to align shared understanding and reasoning
  • The paper treats disclosure as epistemic infrastructure — a mechanism that enables teams to form shared beliefs about sources, provenance, and reliability of ideas.
  • The authors provide a research instrument that operationalizes these four disclosure configurations in a collaborative chat environment and state testable design conjectures for future empirical work.
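The four configurations above are, at bottom, a rendering policy: the same AI-assisted message is shown differently to teammates depending on the disclosure mode. As an illustrative sketch (my own, not the authors' instrument; the marker and labels are assumptions), a collaborative chat system might encode this as:

```python
from dataclasses import dataclass
from enum import Enum

class Disclosure(Enum):
    NO_AI = "no_ai"              # system offers no AI assistance
    HIDDEN = "hidden"            # AI assists, but nothing is revealed
    TRANSLUCENT = "translucent"  # assistance is partially signaled
    VISIBLE = "visible"          # AI contributions are surfaced and attributable

@dataclass
class Message:
    author: str
    text: str
    ai_assisted: bool = False

def render(msg: Message, mode: Disclosure) -> str:
    """Render a chat message under a given disclosure configuration."""
    if not msg.ai_assisted or mode in (Disclosure.NO_AI, Disclosure.HIDDEN):
        return f"{msg.author}: {msg.text}"           # no provenance shown
    if mode is Disclosure.TRANSLUCENT:
        return f"{msg.author} ✦: {msg.text}"         # subtle marker, no explicit attribution
    return f"{msg.author} (with AI): {msg.text}"     # explicit and attributable

print(render(Message("ana", "Let's rank the options.", ai_assisted=True),
             Disclosure.TRANSLUCENT))
```

The point of the sketch is that Hidden and Visible differ only in this rendering layer, which is what makes disclosure manipulable as an experimental variable while holding the underlying assistance constant.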

Data & Methods

  • Conceptual/theoretical development: formalization of an AI Disclosure Design Space and articulation of the trade-offs among accountability, autonomy, and coordination cost.
  • Research instrument: a designed experimental setup for collaborative chat that implements the four disclosure configurations (No AI, Hidden, Translucent, Visible) so researchers can manipulate disclosure as an independent variable.
  • Empirical approach implied (but not executed in this paper): controlled experiments or lab studies using the instrument to measure outcomes such as idea attribution, perceived responsibility, decision quality, coordination effort, and changes in team reasoning processes.
  • Proposed metrics (implied by the design): rates of idea attribution to humans vs. AI, self-reported and observed accountability behaviors, measures of team coordination time/overhead, autonomy indicators (e.g., acceptance/rejection of suggestions), and downstream performance/quality.
  • The paper articulates testable conjectures about how each disclosure configuration will affect these metrics, enabling future experimental and field research.
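The experimental logic implied above can be made concrete with a small sketch: disclosure mode as the randomly assigned independent variable, and one of the proposed outcome measures, the rate at which ideas are attributed to the AI, as a dependent variable. The function names and the questionnaire-style data format are hypothetical illustrations, not from the paper:

```python
import random

CONDITIONS = ["no_ai", "hidden", "translucent", "visible"]

def assign_condition(team_id: int, seed: int = 42) -> str:
    """Randomly assign a team to one disclosure condition (between-subjects)."""
    rng = random.Random(seed + team_id)
    return rng.choice(CONDITIONS)

def ai_attribution_rate(attributions: list[str]) -> float:
    """Share of ideas participants attribute to the AI rather than a human.

    `attributions` is a list like ["human", "ai", ...], e.g. collected from a
    post-task attribution questionnaire (hypothetical measure).
    """
    if not attributions:
        return 0.0
    return sum(a == "ai" for a in attributions) / len(attributions)

# Hypothetical post-task data from one team:
print(ai_attribution_rate(["human", "ai", "ai", "human", "human"]))  # 0.4
```

Comparing this rate (and the other proposed metrics, such as coordination overhead or suggestion acceptance) across the four conditions is the kind of analysis the instrument is designed to enable.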

Implications for AI Economics

  • Productivity and output quality: Disclosure design can alter how teams use AI suggestions (adoption rates, reliance), affecting measured productivity and the quality of collaborative outputs — relevant for estimating AI-driven productivity gains.
  • Allocation of responsibility and liability: Differences in disclosure affect who is seen as accountable for decisions and errors, with implications for contract design, insurance, liability rules, and firm risk exposure.
  • Labor market and task allocation: Visibility of AI assistance may shift task boundaries (which tasks are delegated to humans vs. AI), influencing demand for skills, job design, and wages for collaborative work.
  • Coordination costs and transaction frictions: Designs that reduce epistemic ambiguity (e.g., Visible or Translucent AI) can lower coordination frictions but may increase monitoring costs; these trade-offs influence organizational choices and the economics of adoption.
  • Incentives and moral hazard: Hidden AI can create moral hazard or misattribution of credit; Visible disclosure may change incentives for effort, supervision, and verification, which matters for designing compensation and governance structures.
  • Diffusion and competition: Platform and product-level disclosure norms could become a competitive dimension; regulation or standards requiring disclosure would shape market structure and incumbents’ strategies.
  • Empirical research opportunities for economists:
    • Field experiments or randomized controlled trials using the provided instrument to estimate causal impacts of disclosure on productivity, error rates, and compensation outcomes.
    • Structural models linking disclosure regimes to firm-level decisions (hiring, monitoring, liability), enabling welfare analysis and policy counterfactuals.
    • Natural experiments from policy changes or platform feature rollouts that mandate or alter disclosure practices.
  • Policy relevance: Findings can inform regulation on AI transparency, disclosure requirements in high-stakes settings, and rules for attributing authorship and liability in AI-augmented work.

Assessment

Paper Type: theoretical
Evidence Strength: n/a — The paper is conceptual and does not present empirical tests or causal estimation; it offers a framework and a proposed research instrument but no data-based evidence.
Methods Rigor: medium — The theoretical contribution is clearly structured: it synthesizes prior literature, defines a four-fold disclosure taxonomy, and operationalizes configurations via a proposed collaborative-chat instrument; however, there is no empirical validation, robustness checking, or application to real-world data.
Sample: No empirical sample; the manuscript develops a conceptual AI Disclosure Design Space and provides a laboratory-style research instrument (collaborative chat scenarios) to operationalize four disclosure conditions (No AI, Hidden AI, Translucent AI, Visible AI).
Themes: human_ai_collab, org_design
Generalizability:
  • No empirical validation — applicability to real teams and settings is untested.
  • Focus on chat-based collaborative settings may not generalize to other interfaces (code editors, design tools, email).
  • Impact likely depends on task type (creative vs. routine) and industry/regulatory context.
  • Cultural and legal norms around authorship and accountability may limit transferability across regions.
  • The designed instrument targets small-group coordination; results may differ at firm or market scales.

Claims (8)

Each claim lists: outcome category — outcome measure · direction · confidence · details.

  • "As generative AI becomes an ambient presence in collaborative work, a new social ambiguity emerges around authorship and responsibility."
    Organizational Efficiency — authorship uncertainty / attribution of responsibility · negative · high · 0.02
  • "This condition of authorship uncertainty reshapes how teams attribute ideas, negotiate accountability, and coordinate collective reasoning."
    Team Performance — idea attribution, accountability negotiation, collective reasoning / coordination · negative · high · 0.02
  • "Prior research often treats AI presence as binary, framing it either as a hidden tool or a visible teammate."
    Research Productivity — research framing of AI presence (binary hidden vs. visible) · null_result · high · 0.12
  • "What matters in practice is the design of disclosure: how systems reveal, signal, or conceal AI assistance within collaboration."
    Organizational Efficiency — effects of AI disclosure design on collaboration · positive · high · 0.02
  • "We introduce an AI Disclosure Design Space that conceptualizes disclosure as an epistemic coordination mechanism."
    Organizational Efficiency — conceptualization of disclosure as an epistemic coordination mechanism · positive · high · 0.2
  • "The design space articulates four configurations—No AI, Hidden AI, Translucent AI, and Visible AI—each trading off among accountability, autonomy, and coordination cost."
    Organizational Efficiency — trade-offs among accountability, autonomy, and coordination cost under different disclosure configurations · mixed · high · 0.02
  • "We contribute a research instrument that operationalizes these configurations in a collaborative chat setting and articulate testable design conjectures."
    Research Productivity — operationalization of disclosure configurations in a collaborative chat research instrument · positive · high · 0.2
  • "By framing disclosure as epistemic infrastructure, this work outlines a conceptual roadmap for future empirical and design research on Human–AI collaboration."
    Research Productivity — influence on future empirical and design research agendas · positive · high · 0.02

Notes