Making an AI transparent doesn't automatically yield more shared data: transparency raises stated willingness to share only for users who already trust AI, while immediate sharing decisions remain high and unchanged across human, white-box, and black-box scenarios.
Background: Transparency initiatives in AI systems aim to encourage data-sharing, yet sharing behaviors frequently diverge from stated intentions. This intention–behavior gap reflects dual-process dynamics: intuitive (System 1) responses guide actual sharing behavior, while deliberative (System 2) reasoning governs willingness to share. Whether transparency influences deliberative preferences without affecting immediate behavior – and how trust moderates these effects – remains unclear. This study investigates how transparency, trust, and the processing entity type (human vs. AI) differentially influence deliberative versus immediate sharing decisions, addressing a gap in understanding dual-process dynamics in AI contexts.

Method: To isolate these effects, we conducted a pre-registered online experiment (N=240) where participants interacted with a fictional sleep-optimization app. They were randomly assigned to scenarios where data was processed by either a human expert, a transparent white-box AI, or an opaque black-box AI. This design allowed testing the impact of entity type and transparency on actual data-sharing and willingness to share, while measuring the moderating roles of trust and privacy concerns.

Results: Counter to common assumptions, AI transparency alone did not significantly increase data-sharing. Its positive effect on willingness to share was contingent on pre-existing user trust in AI, particularly for white-box systems. This suggests trust enables transparency's benefits. Moreover, actual sharing often contradicted willingness to share (the privacy paradox), with consistently high sharing rates across all conditions indicating that immediate decisions were largely driven by intuitive System 1 processing rather than deliberative evaluation.

Conclusion: This research challenges the direct benefits attributed to AI transparency in promoting data-sharing, revealing its effectiveness is amplified by, and dependent upon, user trust. It extends privacy and dual-process theories by showing intuitive System 1 processing can dominate AI data-sharing contexts, overriding stated concerns. Practically, fostering trust in AI may be a more vital prerequisite for data-sharing than implementing transparent designs.
Summary
Main Finding
Transparency in AI (white-box explanations) does not by itself increase actual data-sharing behavior. Its positive effect on stated willingness to share appears only for users who already trust AI. Immediate sharing decisions are largely driven by intuitive (System 1) processes, producing a privacy paradox: high actual sharing across conditions despite lower deliberative willingness to share.
Key Points
- Experimental design: pre-registered online experiment (N = 240) with random assignment to three processing-entity conditions — human expert, transparent white-box AI, or opaque black-box AI.
- Two outcome types:
  - Immediate/actual data-sharing (behavioral choice).
  - Willingness to share (deliberative, stated preference).
- Main empirical results:
  - No significant increase in actual sharing from AI transparency alone.
  - Transparency (white-box) increased willingness to share only among participants with higher pre-existing trust in AI.
  - High and similar actual sharing rates across all three conditions, indicating a privacy paradox and dominance of System 1 (intuitive) processing in on-the-spot sharing decisions.
- Moderation and mechanisms:
  - Trust in AI moderates the effect of transparency on deliberative willingness; transparency is effective primarily when trust exists.
  - Privacy concerns did not reliably predict immediate sharing behavior, consistent with dual-process accounts where System 1 overrides deliberative privacy evaluation.
- Practical takeaway: transparency without concurrent trust-building is unlikely to move actual data acquisition outcomes.
Data & Methods
- Sample: N = 240 participants recruited online (pre-registered study).
- Context: interaction with a fictional sleep-optimization app; participants were told their data would be processed by either:
  - a human expert,
  - a transparent (white-box) AI with explainable design,
  - or an opaque (black-box) AI.
- Randomization ensured balanced assignment to the three conditions.
- Measures:
  - Behavioral measure: immediate decision to share requested personal sleep-related data (yes/no).
  - Stated measure: willingness to share (deliberative response).
  - Moderators: pre-existing trust in AI and privacy concern scales.
- Analysis focused on main effects of entity/transparency on both outcomes, and interactions between transparency and trust/privacy.
- Pre-registration and randomized design strengthen internal validity; single-domain, online setting limits external generalizability.
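The design and analysis above can be sketched as an illustrative simulation. Everything here is an assumption for illustration — the variable names, the 7-point trust scale, the median-style trust split at 4, and the numeric effect sizes are not the study's instruments — but the sketch reproduces the reported qualitative pattern: uniformly high actual sharing across conditions, with a transparency boost to willingness only among high-trust users.

```python
import random
from statistics import mean

CONDITIONS = ["human_expert", "white_box_ai", "black_box_ai"]

def simulate_participant(rng):
    """One hypothetical participant: random condition, assumed 1-7 trust scale."""
    condition = rng.choice(CONDITIONS)
    trust = rng.uniform(1, 7)
    # Reported pattern: actual sharing is high everywhere (System 1 driven),
    # while willingness rises with transparency only for high-trust users.
    shared = rng.random() < 0.85
    willingness = 3.0 + (1.5 if condition == "white_box_ai" and trust > 4 else 0.0)
    willingness += rng.uniform(-0.5, 0.5)  # individual noise
    return {"condition": condition, "trust": trust,
            "shared": shared, "willingness": willingness}

def summarize(records):
    """Per-condition sharing rate, and willingness stratified by trust split."""
    out = {}
    for c in CONDITIONS:
        sub = [r for r in records if r["condition"] == c]
        hi = [r["willingness"] for r in sub if r["trust"] > 4]
        lo = [r["willingness"] for r in sub if r["trust"] <= 4]
        out[c] = {
            "sharing_rate": mean(r["shared"] for r in sub),
            "willingness_high_trust": mean(hi) if hi else None,
            "willingness_low_trust": mean(lo) if lo else None,
        }
    return out

rng = random.Random(42)
records = [simulate_participant(rng) for _ in range(240)]  # N = 240 as in the study
summary = summarize(records)
```

Under this simulated pattern, `sharing_rate` stays similar across all three conditions while the high-trust/low-trust willingness gap appears only in the white-box condition — the interaction the study reports.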
Implications for AI Economics
- Cost–benefit of transparency investments:
  - Firms investing in explainability/transparency should not expect automatic increases in obtained data. The ROI of transparency on data acquisition depends on user trust levels.
  - For many products, direct trust-building (reputation, third-party certification, data governance commitments) may be a higher-value investment than transparency tooling alone.
- Market strategy and segmentation:
  - Platforms can segment users by trust profiles: for high-trust segments, transparency features can increase stated willingness and potentially long-term engagement; for low-trust segments, different incentives or trust-building interventions are necessary.
  - Bundling transparency with trust signals (brand reputation, guarantees, certifications) could be more effective than transparency in isolation.
- Implications for data markets and business models:
  - High baseline rates of actual sharing despite stated concerns suggest firms can acquire data even when consumers express reservations; however, reliance on intuitive acceptance raises ethical and regulatory flags.
  - If regulation mandates transparency to protect consumers, policymakers should recognize transparency alone may not change behavior and should pair mandates with enforceable trust/audit mechanisms.
- Welfare and regulatory considerations:
  - The privacy paradox implies that revealed preferences (actual behavior) may not reflect true consumer welfare. Policy should account for System 1 dynamics and consider nudges, required disclosures, or default protections rather than assuming transparency suffices.
- Measurement and evaluation practices:
  - Researchers and firms should measure both actual behavior and stated willingness; relying solely on surveys can misestimate economic responses to design or policy changes.
  - Longitudinal studies are needed: transparency effects on repeated interactions, retention, and downstream monetization may differ from one-off choices.
- Directions for future research important to AI economics:
  - Test generalizability across domains (health vs. finance vs. social apps) and higher-stakes data.
  - Investigate which trust-building mechanisms (reputation, guarantees, audits, insurance) most effectively unlock transparency's potential.
  - Explore dynamic effects: does transparency increase long-run engagement or only affect deliberative attitudes?
  - Model equilibrium effects in data markets when some firms invest in trust versus transparency, and implications for competition and welfare.
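The dual-measurement recommendation above (record stated willingness and actual behavior side by side) can be sketched as a simple intention–behavior gap metric. This is a minimal sketch under assumptions: the 7-point willingness scale and the linear normalization are illustrative choices, not the study's instruments.

```python
def intention_behavior_gap(willingness_scores, shared_flags, scale_max=7):
    """Gap between actual sharing rate and normalized stated willingness.

    A positive gap means people share more than their stated willingness
    implies -- the privacy-paradox pattern reported in the study.
    """
    if len(willingness_scores) != len(shared_flags) or not shared_flags:
        raise ValueError("need equal-length, non-empty measurement lists")
    sharing_rate = sum(shared_flags) / len(shared_flags)
    stated_rate = sum(w / scale_max for w in willingness_scores) / len(willingness_scores)
    return sharing_rate - stated_rate

# Hypothetical example: high actual sharing despite middling stated willingness.
gap = intention_behavior_gap([3, 4, 3, 5, 2], [1, 1, 1, 1, 0])
# gap is positive here: actual sharing exceeds stated willingness.
```

Tracking this metric alongside each condition (rather than surveys alone) is what makes the divergence between System 1 behavior and System 2 preference visible in the first place.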
Limitations to keep in mind: single fictional sleep-app context, online convenience sample, and short-term interaction; effects may differ in higher-stakes, repeated-use, or offline settings.
Assessment
Claims (6)
| Claim | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|
| We conducted a pre-registered online experiment (N=240) where participants interacted with a fictional sleep-optimization app and were randomly assigned to scenarios where data was processed by either a human expert, a transparent white-box AI, or an opaque black-box AI. (Adoption Rate) | positive | high | experimental manipulation / treatment assignment and measurement of sharing outcomes | n=240; 1.0 |
| AI transparency alone did not significantly increase data-sharing. (Adoption Rate) | null_result | high | actual data-sharing (behavioral sharing decisions) | n=240; 0.6 |
| The positive effect of transparency on willingness to share was contingent on pre-existing user trust in AI, particularly for white-box systems. (Adoption Rate) | positive | high | willingness to share (stated/deliberative sharing intention) | n=240; 0.6 |
| Actual sharing often contradicted willingness to share (the privacy paradox), with consistently high sharing rates across all conditions. (Adoption Rate) | mixed | high | discrepancy between stated willingness to share vs actual sharing behavior | n=240; 0.6 |
| Immediate sharing decisions were largely driven by intuitive System 1 processing rather than deliberative evaluation (System 2). (Adoption Rate) | positive | high | dominance of intuitive (System 1) processing in immediate sharing behavior | n=240; 0.6 |
| Transparency's effectiveness in promoting data-sharing is amplified by, and dependent upon, user trust; fostering trust in AI may be a more vital prerequisite for data-sharing than implementing transparent designs. (Adoption Rate) | positive | high | recommendation/policy implication regarding trust vs transparency for promoting data-sharing | n=240; 0.1 |