Disclosing that a chatbot is AI and adopting an empathetic, personalized tone boost trust and lift purchase intent among UAE youth. Perceived manipulation cuts conversion, but higher digital literacy blunts that harm, suggesting transparency and explainable personalization are pro-competitive and pro-consumer.
This manuscript examines how young consumers respond to AI chatbots in social commerce by conceptualizing chatbots as informatics-enabled front-line service systems. Building on a unified model that assigns Stimulus-Organism-Response (SOR) as the system structure, the Persuasion Knowledge Model (PKM) as the ethical-cognition mechanism, and Trust Theory as the service-outcome logic, we test how two service-design choices, identity disclosure (transparency) and conversational tone (personalized vs. generic), shape trust and perceived manipulation, and ultimately purchase intention. In a 2 × 2 between-subjects experiment with UAE youth (ages 18–25), standardized chatbot dialogues were generated and pretested using a large-language-model workflow to ensure consistent stimuli; this design enables controlled comparison but does not fully capture the adaptivity of live chatbots. PLS-SEM results show that transparent AI disclosure and empathetic personalization increase trust and reduce perceived manipulation; trust is the dominant mediator linking design cues to purchase intention, while perceived manipulation exerts a significant negative effect. Digital literacy attenuates the negative influence of manipulation on intention, highlighting a boundary condition relevant to service governance. The findings also yield service-design guidelines: transparency by default, explainable personalization, tone adapted to user needs, and a clear escalation process to human agents.
Summary
Main Finding
Transparent AI identity disclosure and empathetic, personalized conversational tone in chatbots increase trust and reduce perceived manipulation among young consumers (UAE, ages 18–25). Trust is the primary pathway through which these design choices raise purchase intention, while perceived manipulation exerts a significant negative effect; higher digital literacy weakens the negative link from perceived manipulation to purchase intention.
Key Points
- Conceptual framework: Stimulus-Organism-Response (SOR) structures the system; Persuasion Knowledge Model (PKM) explains ethical-cognitive responses; Trust Theory links service cues to outcomes.
- Experimental design: 2 × 2 between-subjects manipulation of (1) identity disclosure (AI transparent vs. not) and (2) conversational tone (empathetic personalized vs. generic).
- Stimuli preparation: Standardized chatbot dialogues generated and pretested using a large-language-model workflow to ensure consistent experimental stimuli.
- Analysis: Partial least squares structural equation modeling (PLS-SEM) used to test hypothesized paths.
- Results:
  - AI transparency and empathetic personalization both increase trust and reduce perceived manipulation.
  - Trust is the dominant mediator between design cues and purchase intention.
  - Perceived manipulation reduces purchase intention independently.
  - Digital literacy attenuates the negative effect of perceived manipulation on purchase intention.
- Limitations: Use of standardized (non-adaptive) dialogues limits ecological validity versus live adaptive chatbots.
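The 2 × 2 crossing described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual stimuli: the condition labels follow the paper, but the opening lines are hypothetical placeholders for the pretested LLM-generated dialogues.

```python
from itertools import product

# Hypothetical opening lines per factor level; the study's real dialogues
# were LLM-generated and pretested for consistency across conditions.
DISCLOSURE = {
    "transparent": "Hi! I'm an AI assistant.",
    "nondisclosed": "Hi! I'm here to help you shop.",
}
TONE = {
    "personalized": "Based on what you browsed earlier, you might like these.",
    "generic": "Here are some popular products.",
}

def build_conditions():
    """Cross the two factors into the four between-subjects cells."""
    return {
        (d, t): f"{DISCLOSURE[d]} {TONE[t]}"
        for d, t in product(DISCLOSURE, TONE)
    }

conditions = build_conditions()
assert len(conditions) == 4  # full factorial: 2 disclosure x 2 tone
```

Each participant sees exactly one cell's script, which is what makes the between-subjects comparison of trust and perceived manipulation clean.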
Data & Methods
- Sample: Young consumers in the UAE, ages 18–25 (experimental, between-subjects).
- Experimental factors: 2 (identity disclosure: transparent vs. nondisclosed/implicit) × 2 (conversational tone: personalized/empathetic vs. generic).
- Stimuli creation: Chatbot dialogues produced and pretested with an LLM-based workflow to maintain consistent messages across conditions.
- Measures: Trust, perceived manipulation, purchase intention; digital literacy included as a moderator (measured and modeled).
- Statistical approach: PLS-SEM to estimate direct and mediated effects and interaction/moderation effects.
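The reported moderation pattern can be illustrated with a small simulation. PLS-SEM itself requires specialized software; as a stdlib-only analog, the sketch below simulates data (not the paper's) in which perceived manipulation lowers purchase intention, with a weaker slope for high-digital-literacy respondents, and recovers the two simple slopes by OLS. All parameter values are assumptions for illustration.

```python
import random

random.seed(7)

def simulate(n=2000):
    """Simulated respondents: (manipulation, high_literacy, intention)."""
    rows = []
    for _ in range(n):
        manip = random.uniform(1, 7)           # perceived manipulation, 1-7
        high_lit = random.random() < 0.5       # digital-literacy split
        slope = -0.2 if high_lit else -0.6     # attenuated when literate
        intent = 6 + slope * manip + random.gauss(0, 0.5)
        rows.append((manip, high_lit, intent))
    return rows

def ols_slope(pairs):
    """Slope of y on x via covariance / variance."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

data = simulate()
low = ols_slope([(m, y) for m, hl, y in data if not hl])
high = ols_slope([(m, y) for m, hl, y in data if hl])
assert low < high < 0  # both negative, but attenuated for high literacy
```

The simple-slopes contrast (`low` vs. `high`) is the moderation effect the paper estimates as an interaction term within PLS-SEM.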
Implications for AI Economics
- Demand and conversion: Design choices (transparency + explainable personalization) materially affect consumer trust and purchase intention—important levers for firms seeking higher conversion in AI-mediated commerce.
- Consumer welfare and information asymmetry: Transparency reduces deception risk and perceived manipulation, improving informed choice and potentially increasing consumer surplus; lack of transparency can create negative externalities (erosion of trust across platforms).
- Market design & competition: Firms that adopt transparency-by-default and explainable personalization can obtain trust-based competitive advantage; however, personalization that is opaque may yield short-term gains but risks regulatory and reputation costs.
- Pricing & segmentation: Trust-building design choices may justify price premiums or higher willingness-to-pay for services perceived as trustworthy; digital literacy moderates behavioral responses, suggesting heterogeneous demand across consumer segments.
- Regulation & governance: Findings support policy interventions that encourage or mandate identity disclosure and explainable personalization in commercial chatbots; also point to value in digital literacy initiatives as a policy lever to reduce vulnerability to manipulation.
- Platform policy: Service governance guidelines should include transparency-by-default, explainable personalization, adaptive tone matching user need, and clear escalation/hand-off mechanisms to human agents to preserve trust and limit manipulation.
- Research & evaluation: Economic evaluations of AI-driven service systems should account for trust as a mediator of market outcomes, manipulation costs, and heterogeneity from user digital literacy; measurement of actual purchases and long-term effects remains necessary to quantify welfare and market-level impacts.
Suggested next empirical steps for AI-economics work: replicate with adaptive/live chatbots and diverse demographics, link experimental trust/manipulation effects to revealed purchase behavior and pricing outcomes, and model firm incentives and regulatory equilibria given trade-offs between personalization, transparency, and competitive dynamics.
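The point that economic evaluations should carry trust as a mediator can be made concrete with the product-of-coefficients logic: the indirect effect of a design cue on purchase intention is the cue→trust path (a) times the trust→intention path (b). The sketch below uses simulated data with assumed path values, not the paper's estimates.

```python
import random

random.seed(3)

def ols_slope(xs, ys):
    """Slope of y on x via covariance / variance."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sum((x - mx) ** 2 for x in xs)

n = 4000
cue = [i % 2 for i in range(n)]                               # randomized 0/1 transparency cue
trust = [3 + 0.8 * c + random.gauss(0, 0.4) for c in cue]     # assumed path a = 0.8
intent = [1 + 0.5 * t + random.gauss(0, 0.4) for t in trust]  # assumed path b = 0.5

a = ols_slope(cue, trust)        # cue -> trust
b = ols_slope(trust, intent)     # trust -> intention
indirect = a * b                 # mediated effect, expected near 0.4
```

In an evaluation, `indirect` (scaled to conversion or willingness-to-pay) is the quantity a firm or regulator would weigh against any manipulation cost.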
Assessment
Claims (13)
| Claim | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|
| Transparent AI identity disclosure increases trust among young consumers (UAE, ages 18–25). [AI Safety and Ethics] | positive | medium | trust | 0.36 |
| An empathetic, personalized conversational tone in chatbots increases trust among young consumers (UAE, ages 18–25). [AI Safety and Ethics] | positive | medium | trust | 0.36 |
| Transparent AI identity disclosure reduces perceived manipulation among young consumers (UAE, ages 18–25). [AI Safety and Ethics] | negative | medium | perceived manipulation | 0.36 |
| Empathetic, personalized conversational tone reduces perceived manipulation among young consumers (UAE, ages 18–25). [AI Safety and Ethics] | negative | medium | perceived manipulation | 0.36 |
| Trust is the primary (dominant) mediator through which transparency and empathetic personalization increase purchase intention. [Firm Revenue] | positive | medium | purchase intention (mediated by trust) | 0.36 |
| Perceived manipulation exerts a significant negative (direct) effect on purchase intention. [Firm Revenue] | negative | medium | purchase intention | 0.36 |
| Higher digital literacy weakens (attenuates) the negative link from perceived manipulation to purchase intention. [Firm Revenue] | positive | medium | purchase intention (moderated by digital literacy) | 0.36 |
| Stimuli (chatbot dialogues) were standardized and pretested using a large-language-model (LLM) workflow to ensure consistent experimental stimuli across conditions. [Research Productivity] | null_result | high | stimuli standardization / experimental control | 0.6 |
| The study employed a 2 × 2 between-subjects experimental design manipulating (1) identity disclosure (transparent vs. nondisclosed) and (2) conversational tone (empathetic/personalized vs. generic). [Research Productivity] | null_result | high | experimental manipulation (design) | 0.6 |
| Partial least squares structural equation modeling (PLS-SEM) was used to test hypothesized direct, mediated, and moderated paths. [Research Productivity] | null_result | high | analytic method | 0.6 |
| Use of standardized (non-adaptive) dialogues limits ecological validity relative to live adaptive chatbots. [Research Productivity] | negative | high | ecological validity | 0.6 |
| Design choices that combine transparency and explainable personalization materially increase consumer trust and purchase intention, making them important levers for firms seeking higher conversion in AI-mediated commerce. [Firm Revenue] | positive | medium | purchase intention / conversion (inferred from trust effects) | 0.36 |
| Policy interventions that encourage or mandate identity disclosure and explainable personalization in commercial chatbots are supported by these findings (to reduce deception risk and perceived manipulation). [Governance and Regulation] | positive | speculative | policy relevance (consumer protection / perceived manipulation) | 0.06 |