Disclosing a chatbot's AI identity and adopting an empathetic, personalized tone both boost trust and lift purchase intent among UAE youth. Perceived manipulation cuts conversion, but higher digital literacy blunts that harm, suggesting that transparency and explainable personalization are pro-competitive and pro-consumer.
This manuscript examines how young consumers respond to AI chatbots in social commerce by conceptualizing chatbots as informatics-enabled front-line service systems. Building on a unified model that assigns Stimulus–Organism–Response (SOR) as the system structure, the Persuasion Knowledge Model (PKM) as the ethical-cognition mechanism, and Trust Theory as the service-outcome logic, we test how two service-design choices, identity disclosure (transparency) and conversational tone (personalized vs. generic), shape trust and perceived manipulation, and ultimately purchase intention. Using a 2×2 between-subjects experiment with UAE youth (ages 18–25), standardized chatbot dialogues were generated and pretested with a large-language-model workflow to ensure consistent stimuli; this design enables controlled comparison but does not fully capture the adaptivity of live chatbots. PLS-SEM results show that transparent AI disclosure and empathetic personalization increase trust and reduce perceived manipulation; trust is the dominant mediator linking design cues to purchase intention, while perceived manipulation exerts a significant negative effect. Digital literacy attenuates the negative influence of manipulation on intention, highlighting a boundary condition relevant for service governance. The results also inform guidelines for designing and delivering such service systems: transparency by default, explainable personalization, tone adapted to user needs, and a defined escalation process.
Summary
Main Finding
Transparent AI identity disclosure and empathetic/personalized conversational tone both increase trust and reduce perceived manipulation among UAE youth (18–25). Trust is the primary mediator driving higher purchase intention in social commerce, while perceived manipulation exerts a significant negative effect on purchase intention. Higher digital literacy weakens (attenuates) the negative impact of perceived manipulation on purchase intention.
Key Points
- Framing: Chatbots are treated as informatics-enabled front-line service systems; the study integrates SOR (Stimulus–Organism–Response), the Persuasion Knowledge Model (PKM), and Trust Theory to explain user responses.
- Experimental manipulations:
  - Identity disclosure: AI-disclosed vs. human-posed.
  - Conversational tone: empathetic/personalized vs. generic.
- Core hypotheses (tested): H1–H4 (effects of disclosure and tone on trust and perceived manipulation), H5–H7 (effects of manipulation and trust on purchase intention and mediation), H8 (digital literacy moderates manipulation → intention).
- Empirical results:
  - Both transparency (AI disclosure) and empathetic personalization increase trust and reduce perceived manipulation.
  - Trust is the dominant pathway from design cues to purchase intention.
  - Perceived manipulation has a robust negative effect on purchase intention.
  - Digital literacy moderates the manipulation → intention link, weakening the negative effect for more digitally literate youth.
- Practical recommendations (from authors): provide transparency by default, make personalization explainable, adapt tone to user needs, and implement escalation pathways for complex/ethical cases.
- Limitations noted: controlled simulation (standardized, non-adaptive dialogues), purposive non-probability sample of UAE youth (limits external generalizability), and not capturing repeated/live chatbot dynamics.
Data & Methods
- Design: 2×2 between-subjects experiment (identity disclosure × conversational tone).
- Population: Digitally active UAE youth, ages 18–25; recruited purposively from universities and early-career workplaces.
- Stimuli: Standardized chatbot dialogues generated and pretested using a large-language-model workflow to ensure consistent content, length, and structure across conditions.
- Manipulation & realism checks: Pretest (n = 30) for realism; manipulation checks confirmed perception of identity and tone conditions.
- Measures: Trust, perceived manipulation, purchase intention, digital literacy (moderator). All measured on 5‑point Likert scales; Cronbach’s α > 0.80 reported.
- Analysis: Partial Least Squares Structural Equation Modeling (PLS-SEM) using SmartPLS v4.0.9.0. Measurement model checks (Cronbach’s α, composite reliability ≥ .70, AVE ≥ .50, Fornell–Larcker and HTMT ≤ .85). Structural tests via bootstrapping (5,000 resamples) to estimate path coefficients, t-values, CIs for direct/mediation/moderation effects.
- Causal claims: The controlled experimental design supports causal inference about the manipulated cues, but results are bounded by the simulated (non-adaptive) interaction context and sample.
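The paper runs its mediation and moderation tests in SmartPLS with 5,000 bootstrap resamples. As a rough regression-based illustration of that bootstrapped-mediation logic (not PLS-SEM itself, and using simulated data with made-up effect sizes), the indirect effect of a design cue on intention through trust can be estimated like this:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400

# Simulated data mirroring the design: a binary disclosure cue,
# a trust mediator, and purchase intention (coefficients are hypothetical).
disclosure = rng.integers(0, 2, n)               # 0 = human-posed, 1 = AI-disclosed
trust = 0.5 * disclosure + rng.normal(0, 1, n)
intention = 0.6 * trust + 0.1 * disclosure + rng.normal(0, 1, n)

def indirect_effect(d, m, y):
    """a*b indirect effect: cue -> mediator slope times
    mediator -> outcome slope (controlling for the cue)."""
    Xa = np.column_stack([np.ones(len(d)), d])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones(len(d)), m, d])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    return a * b

# Percentile bootstrap CI for the indirect effect
# (the paper uses 5,000 resamples; fewer here for speed).
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(disclosure[idx], trust[idx], intention[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
est = indirect_effect(disclosure, trust, intention)
print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the simulated mediation is genuine (disclosure raises trust, trust raises intention), the bootstrap interval should exclude zero, mirroring how a significant mediated path would look in the paper's analysis.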
Implications for AI Economics
- Demand effects and monetization:
  - Design features (transparency and tone) materially change conversion probability via trust and perceived manipulation; firms can influence short-run purchase behavior by adjusting disclosure and personalization policies.
  - Trust acts as a value-creating asset in AI-mediated commerce; investments in transparent, explainable personalization can raise conversion and retention.
- Consumer welfare and market efficiency:
  - Transparent AI reduces information asymmetry and may improve allocative efficiency by aligning expectations; opaque personalization can produce welfare losses through manipulation and reduced consumer autonomy.
  - Digital literacy is a boundary condition: better-informed consumers are less negatively affected by perceived manipulation, suggesting that public investment in digital literacy can change market outcomes and reduce harms.
- Product design and pricing strategies:
  - Firms face a trade-off: aggressive, opaque personalization may raise short-run compliance but risks long-term backlash, regulatory scrutiny, and lower customer lifetime value due to manipulation concerns. Explainable personalization, by contrast, supports sustainable pricing and brand equity.
- Regulatory and governance implications:
  - The evidence supports policy interventions that require disclosure or transparent identification of AI agents, and explainability of personalization logic, to protect consumers, especially less digitally literate segments.
  - Governance frameworks should consider mandatory default transparency, limits on covert persuasion, and obligations for escalation and human oversight in ethically sensitive interactions.
- Empirical modeling and forecasting:
  - Macro- and microeconomic models of platform revenue and consumer surplus should include trust as an intermediary variable and perceived manipulation as a negative demand shock. Segment-level models should incorporate digital literacy as a moderator of demand elasticity with respect to personalization intensity.
- Research and firm strategy agenda:
  - Test live, adaptive chatbot interactions and repeated-use dynamics to estimate long-run effects (retention, churn, reputation).
  - Run cross-cultural comparisons to quantify how social norms alter the trust/manipulation trade-off and to tailor platform strategies across regions.
  - Measure changes in customer lifetime value and churn rates when designing personalization-transparency policies.
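The modeling suggestion above can be made concrete with a toy logistic conversion model: trust enters positively, perceived manipulation enters as a negative demand shock, and digital literacy shrinks the manipulation penalty toward zero. All coefficients below are illustrative assumptions chosen to match the signs of the paper's findings, not estimates from its data.

```python
import math

def purchase_probability(trust, manipulation, digital_literacy,
                         b0=-1.0, b_trust=0.9, b_manip=-0.7, b_mod=0.4):
    """Toy logistic conversion model (illustrative coefficients only).

    digital_literacy in [0, 1] moderates the manipulation penalty:
    the effective manipulation coefficient shrinks toward zero as
    literacy rises, mirroring the paper's moderation finding (H8).
    """
    effective_manip = b_manip * (1 - b_mod * digital_literacy)
    z = b0 + b_trust * trust + effective_manip * manipulation
    return 1 / (1 + math.exp(-z))

# Higher literacy blunts the manipulation penalty on conversion.
low_lit = purchase_probability(trust=0.5, manipulation=1.0, digital_literacy=0.0)
high_lit = purchase_probability(trust=0.5, manipulation=1.0, digital_literacy=1.0)
print(low_lit < high_lit)  # True
```

A segment model along these lines would estimate `b_mod` from data and use it to compute segment-specific demand elasticities with respect to personalization intensity.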
Assessment
Claims (13)
| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Transparent AI identity disclosure increases trust among young consumers (UAE, ages 18–25). | AI Safety and Ethics | positive | medium | trust | 0.36 |
| An empathetic, personalized conversational tone in chatbots increases trust among young consumers (UAE, ages 18–25). | AI Safety and Ethics | positive | medium | trust | 0.36 |
| Transparent AI identity disclosure reduces perceived manipulation among young consumers (UAE, ages 18–25). | AI Safety and Ethics | negative | medium | perceived manipulation | 0.36 |
| Empathetic, personalized conversational tone reduces perceived manipulation among young consumers (UAE, ages 18–25). | AI Safety and Ethics | negative | medium | perceived manipulation | 0.36 |
| Trust is the primary (dominant) mediator through which transparency and empathetic personalization increase purchase intention. | Firm Revenue | positive | medium | purchase intention (mediated by trust) | 0.36 |
| Perceived manipulation exerts a significant negative (direct) effect on purchase intention. | Firm Revenue | negative | medium | purchase intention | 0.36 |
| Higher digital literacy weakens (attenuates) the negative link from perceived manipulation to purchase intention. | Firm Revenue | positive | medium | purchase intention (moderated by digital literacy) | 0.36 |
| Stimuli (chatbot dialogues) were standardized and pretested using a large-language-model (LLM) workflow to ensure consistent experimental stimuli across conditions. | Research Productivity | null_result | high | stimuli standardization / experimental control | 0.6 |
| The study employed a 2×2 between-subjects experimental design manipulating (1) identity disclosure (transparent vs. nondisclosed) and (2) conversational tone (empathetic/personalized vs. generic). | Research Productivity | null_result | high | experimental manipulation (design) | 0.6 |
| Partial least squares structural equation modeling (PLS-SEM) was used to test hypothesized direct, mediated, and moderated paths. | Research Productivity | null_result | high | analytic method | 0.6 |
| Use of standardized (non-adaptive) dialogues limits ecological validity relative to live adaptive chatbots. | Research Productivity | negative | high | ecological validity | 0.6 |
| Design choices that combine transparency and explainable personalization materially increase consumer trust and purchase intention, making them important levers for firms seeking higher conversion in AI-mediated commerce. | Firm Revenue | positive | medium | purchase intention / conversion (inferred from trust effects) | 0.36 |
| Policy interventions that encourage or mandate identity disclosure and explainable personalization in commercial chatbots are supported by these findings (to reduce deception risk and perceived manipulation). | Governance and Regulation | positive | speculative | policy relevance (consumer protection / perceived manipulation) | 0.06 |