Creators hide their use of generative AI to preserve authenticity, turning 'efficiency' gains into downstream repair and performance work; those with more education, money, or team support are better able to pass, widening trust-based inequality.
Generative AI is increasingly embedded in China's short-video production, yet on authenticity-oriented platforms its visible use can be discrediting. Drawing on 16 in-depth interviews with creators active on Xiaohongshu and Douyin, this article introduces AI passing: creators' strategic efforts to conceal and humanize AI-assisted drafts so that outputs plausibly appear human-authored. We show that creators associate legible AI assistance with intertwined trust vulnerabilities, including epistemic unreliability, anticipated relational penalties, and platform authenticity regimes. To manage this legibility, creators perform four recurring forms of invisible authenticity labor: epistemic verification, linguistic naturalization, narrative restructuring, and performative embodiment. These practices reallocate work from generation to downstream repair and performance, complicating claims that AI simply improves efficiency. Finally, we demonstrate that passing capacity is stratified by educational and professional capital, economic resources and team support, and platform position, producing trust-based inequality in who can leverage AI while sustaining credibility, voice, and liveness.
Summary
Main Finding
Generative AI in China’s short-video creator economy often increases, rather than reduces, creators’ invisible labor because visible AI use can threaten perceived authenticity. Creators therefore engage in "AI passing"—strategic work to conceal and humanize AI-assisted drafts—performing four types of authenticity labor (epistemic verification, linguistic naturalization, narrative restructuring, performative embodiment). Capacity to pass is uneven and correlated with education/professional capital, economic/team resources, and platform position, producing trust-based inequality in who can exploit AI while preserving credibility, voice, and liveness.
(Reference: Su et al., 2026. Front. Psychol. 17:1800866. DOI 10.3389/fpsyg.2026.1800866.)
Key Points
- AI passing: defined as creators’ deliberate efforts to remove recognizable “AI traces” so content plausibly passes as human-authored; treated as a situated risk-management practice informed by platform authenticity norms.
- Trust vulnerabilities that make visible AI risky:
  - Epistemic unreliability (audiences infer low accuracy/knowledge if AI is visible).
  - Anticipated relational penalties (signals of low effort, sincerity, or deception).
  - Platform authenticity regimes (platform cultures and governance reward perceived “genuine” experience).
- Four recurring forms of invisible authenticity labor:
  - Epistemic verification — fact‑checking and grounding AI drafts in lived detail or domain knowledge.
  - Linguistic naturalization — rewriting to match personal voice, idioms, and habitual phrasing.
  - Narrative restructuring — reorganizing AI-generated structure into plausible experiential arcs or dramaturgy.
  - Performative embodiment — rehearsing or staging delivery, deliberate small “mistakes,” and embodied cues to signal spontaneity.
- Labor relocation: generative AI shifts effort from drafting to downstream repair, curatorial judgment, and performance-intensive tasks; thus automation can create new, skill‑intensive invisible work.
- Stratification and inequality: creators with higher education/professional capital, disposable income, team support, or favorable platform position can perform passing more effectively, capturing AI productivity gains without suffering reputational costs. Less-resourced creators face a trade-off: use AI and risk credibility, or avoid AI and incur time costs.
- Disclosure is politically and practically ambiguous: transparency can backfire by making tool limits salient; creators strategically choose non-disclosure plus passing practices.
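The adoption trade-off described above can be framed as a simple expected-value comparison. The sketch below is illustrative only (not a model from the paper); the function name, parameters, and all numbers are hypothetical.

```python
# Illustrative sketch of the AI-adoption trade-off: adopt AI only if
# net time savings (after invisible "passing" labor) outweigh the
# expected reputational penalty of detection. All values hypothetical.

def adopt_ai(gross_time_saved_h, passing_labor_h, hourly_value,
             p_detection, reputational_cost):
    net_benefit = (gross_time_saved_h - passing_labor_h) * hourly_value
    expected_penalty = p_detection * reputational_cost
    return net_benefit > expected_penalty

# A well-resourced creator: cheap passing labor, low detection risk.
print(adopt_ai(5.0, 1.0, 30.0, 0.05, 500.0))   # True
# A less-resourced creator: costly passing labor, high detection risk.
print(adopt_ai(5.0, 4.0, 30.0, 0.30, 500.0))   # False
```

Under these toy parameters, the same tool is rational for one creator and irrational for another — the stratification the paper documents, restated as arithmetic.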
Data & Methods
- Design: Qualitative, semi-structured interview study aimed at mechanism-building.
- Sample: 16 short-video creators in China (June–September 2025). Criteria: activity on major short-video platforms, routine use of generative AI for text tasks (scripts, captions), and work in persona/voice‑centered niches.
- Platforms: Primarily Xiaohongshu (10/16) and Douyin (5/16); also Kuaishou, Bilibili, WeChat.
- Demographics: Ages 18–44; 13 women, 3 men; follower counts from hundreds to ~306k. Occupations ranged from students to media professionals and teachers; education levels included bachelor’s, master’s, PhD.
- Data collection: Mandarin interviews, audio-recorded; incident-based prompts asking participants to walk through recent AI-assisted posts and revisions (what was generated, what was revised, what felt “AI-like,” motives for disclosure/non-disclosure).
- Analysis: Thematic coding to identify recurring practices and motivations; emphasis on situated descriptions of workflow and trade-offs.
- Limitations noted by authors: small, purposive China-specific sample; qualitative approach limits generalizability but yields detailed mechanisms.
Implications for AI Economics
- Productivity claims and automation accounting:
  - Standard measures of AI-driven productivity risk overestimating net gains when they ignore downstream repair and performance labor. Economic models should distinguish gross drafting speedups from net time/cost after verification, rewriting, and rehearsing.
  - Invisible labor time is economically consequential; firm- and platform-level productivity metrics should incorporate these hidden inputs to avoid biased estimates.
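The gross-vs-net distinction above is easy to state as a back-of-envelope calculation. All numbers below are hypothetical, for illustration only — they are not estimates from the study.

```python
# Hypothetical accounting: gross drafting speedup vs. net speedup once
# downstream repair labor (verification, rewriting, rehearsal) is counted.

def net_speedup(manual_h, ai_draft_h, verification_h,
                rewriting_h, rehearsal_h):
    """Ratio of fully manual time to total AI-assisted time,
    including the invisible authenticity labor."""
    total_ai_h = ai_draft_h + verification_h + rewriting_h + rehearsal_h
    return manual_h / total_ai_h

# Gross view: 3 h of manual scripting vs. 0.5 h of AI drafting -> 6x.
print(round(3.0 / 0.5, 1))                              # 6.0
# Net view: add verification, naturalization, and rehearsal time.
print(round(net_speedup(3.0, 0.5, 0.8, 1.0, 0.5), 2))   # 1.07
```

A headline "6x" speedup collapses to roughly break-even once the repair and performance work is included — the measurement bias the bullet points warn about.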
- Distributional effects and labor market inequality:
  - Returns to AI will be uneven: creators with complementary human capital (education, domain expertise, performance skill), capital (teams, paid editors), and platform prominence capture disproportionate benefits. This may widen income and attention inequality within creator economies.
  - AI complements certain types of human skill (judgment, voice-craft, embodied performance), so demand and wage premia for those skills may rise.
- Platform competition and market structure:
  - Platforms that reward perceived authenticity will create incentives for passing and may implicitly favor creators capable of passing; platform algorithmic design and governance choices (e.g., penalties for visible AI, disclosure policies) will shape equilibrium adoption patterns and entry barriers.
  - Platforms’ opaque curation amplifies strategic uncertainty, increasing the value of reputational capital and team resources—potentially entrenching incumbent creators.
- Policy and governance considerations:
  - Disclosure rules & transparency: blanket disclosure requirements may have ambiguous effects and could penalize creators without compensating interpretive frameworks; policymakers should consider nuanced disclosure modalities (contextual, functionality‑focused) and evaluate impacts empirically before mandating.
  - Labor protections & support: as AI reallocates work toward less visible tasks, labor statistics and regulatory frameworks should recognize and measure invisible authenticity labor to inform social protections, training programs, and support for precarious creators.
- Research and measurement priorities:
  - Quantify invisible labor: develop time-use studies and platform telemetry to measure time spent on verification, rewriting, and performance relative to AI drafting.
  - Causal impact: assess how AI adoption affects revenues, engagement, and reputational risk across creator strata (e.g., randomized encouragement or instrumented adoption studies).
  - Platform policy experiments: test how different disclosure formats, labeling regimes, or curation signals affect creator behavior, audience trust, and inequality.
- Implications for businesses and creators:
  - Investments complementing AI (editing teams, domain expertise, training in embodied performance) can be a decisive source of competitive advantage.
  - Business models that commodify passing (e.g., agencies offering "AI‑humanization" services) may expand, monetizing the invisible repair tasks the paper documents.
Suggested next steps for AI economics researchers: integrate measures of invisible authenticity labor into models of automation returns; study heterogeneous effects of AI on income and entry; evaluate platform policy impacts on welfare and distribution in creator markets.
Assessment
Claims (8)
| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Generative AI is increasingly embedded in China's short-video production. | Adoption Rate | positive | medium | degree of AI adoption in short-video production | n=16; 0.05 |
| On authenticity-oriented platforms, visible use of AI can be discrediting for creators. | Consumer Welfare | negative | high | perceived reputational/discrediting effects of visible AI use | n=16; 0.18 |
| Creators engage in 'AI passing': strategic efforts to conceal and humanize AI-assisted drafts so that outputs plausibly appear human-authored. | Task Allocation | positive | high | use of concealment/humanization strategies for AI outputs | n=16; 0.18 |
| Creators associate legible AI assistance with intertwined trust vulnerabilities, including epistemic unreliability, anticipated relational penalties, and platform authenticity regimes. | Worker Satisfaction | negative | high | perceived trust vulnerabilities tied to visible AI assistance | n=16; 0.18 |
| To manage AI legibility, creators perform four recurring forms of invisible authenticity labor: epistemic verification, linguistic naturalization, narrative restructuring, and performative embodiment. | Task Allocation | positive | high | types of labor performed to conceal/humanize AI outputs | n=16; 0.18 |
| These invisible authenticity practices reallocate work from generation to downstream repair and performance, complicating claims that AI simply improves efficiency. | Task Allocation | negative | high | shift in locus of work and implications for efficiency | n=16; 0.18 |
| Passing capacity is stratified by educational and professional capital, economic resources and team support, and platform position. | Inequality | negative | high | variation in ability to perform 'AI passing' across creators | n=16; 0.18 |
| This stratification produces trust-based inequality in who can leverage AI while sustaining credibility, voice, and liveness. | Inequality | negative | high | inequality in access to benefits of AI conditioned on ability to sustain trust/credibility | n=16; 0.18 |