AI monitoring and predictive models can flag risky online gambling behaviour and enable tailored interventions, but evidence is mostly retrospective and short-term; without randomized evaluations, transparency, and stronger data governance the systems remain promising prototypes rather than proven public‑policy tools.
Deep technologies combine engineering innovation and scientific findings to solve complex problems and are becoming particularly relevant to the gambling industry. With the global rise of gambling practices and the subsequent increase in gambling-related problems and disorders, deep technologies have emerged as a way to create safer online gambling environments. However, there is still limited knowledge regarding their applicability and consequences. The present study systematically reviewed the existing literature on deep technologies in gambling environments, such as online casinos and betting platforms, and explored their potential benefits, risks, and effectiveness in promoting safer gambling experiences. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Searches were conducted in the Web of Science, PubMed, Scopus, EBSCO, and IEEE databases, supplemented by manual searches. A total of sixty-eight studies were included in the review. In general, four primary applications of deep technologies in online settings were found: (i) behavioural monitoring and feedback; (ii) predictive risk modelling; (iii) decision support and AI classifiers; and (iv) limit-setting/self-exclusion tools. These were primarily used to identify and classify problematic gambling, prompt individual action, regulate gambling behaviours, raise awareness of risk levels, promote responsible gambling practices, support research and interventions, and evaluate player-protection initiatives. Together, the findings suggest that deep technologies offer ample opportunities to enhance gambler safety and reduce potential risks, although their implementation raises challenges such as privacy and ethical concerns, malicious data use, misclassification of risk levels, and difficulties in large-scale application. Limitations and directions for future studies are discussed.
Summary
Main Finding
Deep technologies (AI, machine learning, predictive analytics, automated monitoring) show clear potential to detect, prevent, and mitigate gambling-related harm in online environments—via behavioural monitoring, predictive risk models, AI classifiers/decision support, personalized messaging, and limit/self‑exclusion tools. However, empirical evidence is heterogeneous and partial: technologies can improve early detection and enable tailored interventions, but important implementation, ethical, privacy, accuracy, and incentive problems remain. Large-scale effectiveness, long‑term outcomes, and alignment with public-health goals are still under-evaluated.
Key Points
Scope and scale
- Systematic review of 68 peer‑reviewed empirical studies (published ≥2015).
- Data from ~1,594,074 individuals, predominantly European samples (50% of studies).
- Review registered in PROSPERO (CRD1049386) and followed PRISMA guidelines; searches Dec 10–17, 2024 plus manual searches into 2025.
Conceptual framing
- Operational definition: “deep tech” = computational approaches combining algorithmic modelling and/or intelligent automation embedded in gambling platforms (real‑time/automated systems). Standard descriptive stats included only when part of automated systems.
Primary application categories (with study counts)
- Predictive risk modelling (n = 26)
- Decision support / AI classifiers (n = 14)
- Behavioural monitoring (n = 8)
- Personalized messaging / alerts (n = 10)
- Restriction / self‑regulation tools (limit‑setting, self‑exclusion) (n = 11)
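The predictive risk-modelling category above can be illustrated with a minimal sketch. Every feature name, weight, and threshold below is hypothetical and merely stands in for whatever a study's fitted model would supply; no reviewed study uses these values.

```python
import math

# Hypothetical behavioural-log features for one account (illustrative only).
player = {
    "sessions_per_week": 12.0,
    "mean_stake_eur": 25.0,
    "net_loss_30d_eur": 800.0,
    "night_play_share": 0.4,   # fraction of play between midnight and 6 am
}

# Made-up weights standing in for a fitted model's coefficients.
WEIGHTS = {
    "sessions_per_week": 0.08,
    "mean_stake_eur": 0.01,
    "net_loss_30d_eur": 0.002,
    "night_play_share": 1.5,
}
BIAS = -3.0

def risk_score(features: dict) -> float:
    """Logistic score in [0, 1]: higher means more at-risk (illustrative)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

score = risk_score(player)
flagged = score >= 0.5  # the threshold would be set per policy, not fixed at 0.5
```

In practice the threshold choice is itself a policy decision, since it trades false positives against false negatives (a point taken up under the economics implications below).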
Evidence quality and findings
- Methodological quality scores (QuADS) averaged high (mean 34.38 / 39).
- Studies show promising short‑term detection and engagement effects (e.g., flagging risky behaviour, prompting self‑limits), but long‑term reductions in harm and population‑level impact are inconclusive.
- Common benefits: earlier risk identification, personalized interventions, scalability of monitoring, richer behavioural insights for researchers and regulators.
- Common risks: privacy breaches, ethical concerns, potential for malicious use of data, model misclassification (false positives/negatives), low uptake of voluntary tools, industry incentives to prioritize engagement/revenue over safety, and practical scaling/implementation challenges.
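Retrospective metrics such as AUC, precision, and recall recur throughout the reviewed studies. A self-contained sketch on invented toy labels shows how they are computed; the data are purely illustrative.

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = at-risk)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def auc(y_true, scores):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 at-risk players (1) and 3 others (0).
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
p, r = precision_recall(y_true, y_pred)   # p ≈ 0.67, r ≈ 0.67
a = auc(y_true, scores)                   # a ≈ 0.89
```

Note that all three numbers are retrospective: they say nothing about whether flagging actually reduced harm, which is why the review distinguishes them from causal impact measures.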
Gaps identified
- Few randomized controlled trials or long-term follow-ups.
- Sparse cost‑effectiveness analyses and limited cross‑jurisdictional evaluations.
- Lack of standardized benchmarks for model validity, transparency, and auditing.
Data & Methods
Search and selection
- Databases: PubMed, Web of Science, Scopus, EBSCO, IEEE (via b‑on); manual searches and reference-list checks.
- Search window: studies published from January 2015 onward; languages: English, French, Italian, Portuguese.
- Initial hits: 1,229 records; after screening and manual search, 68 empirical studies included.
Inclusion / exclusion highlights
- Included: empirical studies using behavioural/log data and advanced computational or automated intervention systems for adults (≥18).
- Excluded: reviews, meta‑analyses, case studies, non‑empirical pieces, clinical interventions without a technological interface, purely descriptive analyses not embedded in automated systems.
Data extraction & appraisal
- Extracted: tech type, study/sample design, data characteristics, analyses, outcomes, opportunities/risks.
- Risk of bias: Quality Appraisal for Diverse Studies (QuADS); high inter‑rater agreement reported.
Study populations & measures
- Most studies analyzed operator log‑data (sessions, bets, frequencies); some experimental designs compared groups (e.g., self‑excluders vs. non‑problematic gamblers).
- Participant characteristics: majority male (≈3:1), mean age ≈40 years.
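Operator log-data of the kind described above typically arrives as raw session rows before any modelling. A small sketch of the aggregation step, with hypothetical rows and feature names not taken from any particular study:

```python
from datetime import datetime

# Hypothetical raw session log rows: (start, end, total_staked, net_result).
log = [
    (datetime(2024, 1, 1, 22, 0), datetime(2024, 1, 2, 1, 30), 120.0, -80.0),
    (datetime(2024, 1, 3, 18, 0), datetime(2024, 1, 3, 19, 0), 40.0, 15.0),
    (datetime(2024, 1, 5, 23, 0), datetime(2024, 1, 6, 2, 0), 200.0, -200.0),
]

def summarise(sessions):
    """Aggregate raw log rows into the kind of per-player features
    reviewed studies feed into risk models (illustrative names)."""
    n = len(sessions)
    total_minutes = sum((end - start).total_seconds() / 60
                        for start, end, *_ in sessions)
    return {
        "n_sessions": n,
        "mean_session_minutes": total_minutes / n,
        "total_staked": sum(stake for _, _, stake, _ in sessions),
        "net_loss": -sum(net for *_, net in sessions),
        "late_night_share": sum(1 for start, *_ in sessions
                                if start.hour >= 22) / n,
    }

features = summarise(log)
```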
Implications for AI Economics
Incentives and market structure
- Misaligned incentives: commercial operators have revenue motives that may conflict with deploying protective deep tech. Without regulatory constraints or reputational pressures, firms may favor personalization that increases engagement over safety.
- Data advantages concentrate with incumbents: large platforms that collect extensive behavioral logs can build more accurate models, potentially raising entry barriers and creating strategic asymmetries.
Externalities and welfare
- Problem gambling generates social costs (financial harm, health services, family effects). Effective deep tech could internalize some externalities by reducing harm, increasing social welfare; but harms from misclassification or data misuse could create negative welfare effects.
- False positives (unnecessary restrictions) and false negatives (missed harms) have asymmetric costs. Economists should treat model error as an economic friction—designing policy to balance Type I/II error externalities.
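The asymmetric-cost point can be made concrete: given a prevalence of at-risk players and a model's sensitivity/specificity, the expected per-player cost of errors is a weighted sum of miss and false-alarm rates. All numbers below are illustrative assumptions, not estimates from the review.

```python
# Illustrative welfare accounting of model error (all numbers hypothetical).
prevalence = 0.03          # share of players actually at risk
sensitivity = 0.80         # P(flag | at risk)
specificity = 0.95         # P(no flag | not at risk)

COST_FALSE_NEG = 5_000.0   # assumed social cost of a missed at-risk player
COST_FALSE_POS = 50.0      # assumed cost of needlessly restricting a player

def expected_error_cost_per_player(prev, sens, spec, c_fn, c_fp):
    fn_rate = prev * (1 - sens)          # missed harms (Type II)
    fp_rate = (1 - prev) * (1 - spec)    # unnecessary restrictions (Type I)
    return fn_rate * c_fn + fp_rate * c_fp

cost = expected_error_cost_per_player(
    prevalence, sensitivity, specificity, COST_FALSE_NEG, COST_FALSE_POS
)
```

Under these placeholder values the missed-harm term dominates, which is why a welfare-oriented regulator might accept a higher false-positive rate than a revenue-oriented operator would choose.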
Regulation and governance
- Need for policy: mandatory safeguards, transparency requirements, independent auditing, and standardized performance metrics for models (e.g., precision/recall by risk strata, calibration, fairness checks).
- Data governance: rules on data sharing for research vs. commercial exploitation; regulatory frameworks should specify permitted data uses, retention, and protections to prevent malicious targeting or discriminatory practices.
- Market interventions: regulators might require operator‑level harm‑reduction benchmarks, subsidize neutral third‑party monitoring, or mandate minimum protection features.
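One of the proposed standardized metrics, calibration by risk strata, amounts to comparing mean predicted risk with the observed outcome rate in each stratum. A toy sketch with invented scores and outcomes:

```python
# Sketch of a calibration check by risk stratum (toy data, illustrative).
# Each pair: (model risk score, observed outcome: 1 = later harm indicator).
preds = [(0.05, 0), (0.10, 0), (0.15, 1), (0.40, 0), (0.55, 1),
         (0.60, 1), (0.70, 0), (0.85, 1), (0.90, 1), (0.95, 1)]

STRATA = [(0.0, 0.33), (0.33, 0.66), (0.66, 1.01)]  # low / medium / high

def calibration_table(pairs, strata):
    """Mean predicted risk vs. observed rate per stratum; the two should
    roughly agree if the model is well calibrated."""
    table = []
    for lo, hi in strata:
        bucket = [(s, y) for s, y in pairs if lo <= s < hi]
        if not bucket:
            table.append((lo, hi, None, None))
            continue
        mean_pred = sum(s for s, _ in bucket) / len(bucket)
        obs_rate = sum(y for _, y in bucket) / len(bucket)
        table.append((lo, hi, round(mean_pred, 2), round(obs_rate, 2)))
    return table

report = calibration_table(preds, STRATA)
```

An audit requirement could mandate publishing exactly this kind of table per risk stratum, alongside precision/recall, so that systematic over- or under-prediction in any stratum is visible to regulators.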
Cost-effectiveness and public policy design
- Economic evaluations are missing but necessary: cost of model development/deployment vs. benefits (reduced harms, lower treatment costs, preserved consumer surplus).
- Policy levers could include conditional licensing (safety tech requirements), mandated transparent A/B testing (RCTs) for protective measures, or taxes/levies tied to harm metrics to internalize externalities.
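A benefit–cost ratio of the kind such evaluations would produce is simple arithmetic once the inputs are estimated. The sketch below uses placeholder values, not figures from the review.

```python
# Back-of-envelope benefit-cost sketch for a protective deep-tech rollout.
# All inputs are hypothetical placeholders, not estimates from the review.
n_players = 100_000
baseline_harm_rate = 0.03        # share of players experiencing harm
harm_reduction = 0.10            # assumed relative reduction from the tool
social_cost_per_case = 10_000.0  # assumed social cost per harmed player

deployment_cost = 1_500_000.0    # development + operating cost

cases_avoided = n_players * baseline_harm_rate * harm_reduction
benefit = cases_avoided * social_cost_per_case
bcr = benefit / deployment_cost  # benefit-cost ratio; > 1 favours rollout
```

The hard empirical work is estimating `harm_reduction` and `social_cost_per_case` credibly, which is precisely what the RCTs and cost-effectiveness studies called for here would supply.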
Information asymmetries & credibility
- Consumers lack visibility into operator algorithms and protections; credible signaling (independent certification, published audits) could shift consumer choice and competitive dynamics toward safer operators.
- Regulators and researchers need access to operator data under controlled conditions to validate tools and reduce asymmetric information about real harms.
Research and evaluation agenda (economic priorities)
- Run RCTs and quasi‑experimental evaluations comparing different tech interventions with economic outcome measures (healthcare costs, income shocks, productivity losses).
- Develop standardized economic metrics: willingness‑to‑pay to avoid gambling harm, social cost per problem gambler, and benefit–cost ratios for deep‑tech interventions.
- Study behavioral responses to protective tech (e.g., displacement effects across platforms, substitution to unregulated venues).
- Model market responses: how requiring safety tech affects entry, innovation, pricing, and consumer surplus.
Practical recommendations for policymakers and researchers
- Require independent audits, transparent performance reporting, and public benchmarking for models used to flag risky gambling.
- Mandate data‑access protocols for accredited researchers to enable reproducible impact evaluation while protecting privacy.
- Incentivize operators (via regulation or subsidies) to prioritize and publicly validate protective tools; avoid reliance on voluntary, industry‑led self‑regulation alone.
- Commission cost‑effectiveness studies and long‑term RCTs to move from proof‑of‑concept to validated policy instruments.
Summary statement
Deep tech offers economically meaningful tools to reduce gambling harm, but realizing welfare gains depends on overcoming incentive misalignment, ensuring robust governance, validating effectiveness with rigorous evaluations, and integrating protections into regulatory frameworks that internalize the social costs of problem gambling.
Assessment
Claims (21)
| Claim | Category | Direction | Confidence | Outcome | Score |
|---|---|---|---|---|---|
| The review included 68 empirical and methodological studies on deep technologies in online gambling. | Other | null_result | high | Number of included studies (study count = 68; n = 68) | 0.12 |
| Searches were performed in Web of Science, PubMed, Scopus, EBSCO and IEEE, plus manual searches, following PRISMA guidelines. | Other | null_result | high | Search strategy / databases searched (qualitative) | 0.12 |
| Deep technologies (machine learning, AI-driven monitoring, engineering–science integrations) are increasingly applied in online casinos, sportsbooks and related platforms. | Adoption Rate | positive | medium | Presence and frequency of ML/AI applications in platform contexts (qualitative / count of studies) | 0.07 |
| Four primary application areas were identified: (1) behavioural monitoring and feedback, (2) predictive risk modelling, (3) decision support and AI classifiers, and (4) limit‑setting and self‑exclusion tools. | Other | null_result | high | Application area classification (categorical counts / thematic presence) | 0.12 |
| Behavioural monitoring and feedback systems enable real‑time tracking of play patterns and provision of tailored nudges or warnings. | Other | positive | medium | Ability to track behaviour in real time and deliver tailored feedback (system functionality; implementation examples) | 0.07 |
| Predictive risk‑modelling algorithms can estimate individual risk of problematic gambling using behavioural data. | Output Quality | positive | medium | Predictive performance for classifying risk (AUC, precision, recall, classification accuracy) | 0.07 |
| Decision‑support and AI classifiers can automatically classify player states (e.g., risk levels) to trigger interventions or inform staff/research. | Decision Quality | positive | medium | Classification of player state (risk level) and triggering of interventions (classification metrics, system outputs) | 0.07 |
| Limit‑setting and self‑exclusion tools informed by algorithms have been prototyped or implemented to provide algorithmically informed limits, reminders, and automated self‑exclusion pathways. | Other | positive | medium | Existence and functioning of algorithmic limit/self‑exclusion tools (prototype implementation; user interactions) | 0.07 |
| Reported benefits include improved detection of high‑risk behaviour patterns beyond self‑report. | Output Quality | positive | medium | Detection/classification accuracy of high‑risk behaviour relative to self‑report (AUC, precision/recall comparisons) | 0.07 |
| There is potential for timely, personalized interventions (nudges/warnings) that could reduce harm, but causal evidence of long‑term effectiveness is limited. | Consumer Welfare | mixed | medium | Intervention uptake and short‑term behavioural change (pilot outcomes) versus long‑term harm reduction (largely unmeasured) | 0.07 |
| Evaluation approaches in the reviewed literature varied widely, with many studies using retrospective accuracy metrics (AUC, precision/recall) rather than causal impact measures on harm reduction. | Other | null_result | high | Type of evaluation used (retrospective predictive metrics vs. causal designs) | 0.12 |
| Typical data used in studies are platform behavioural logs (bets, stakes, timestamps, session durations), account metadata, and in some cases limited self‑report measures. | Other | null_result | high | Data types employed in models (behavioural log variables, account metadata, self‑report presence) | 0.12 |
| Privacy and ethical concerns are substantial: continuous monitoring and sensitive behavioural inference raise privacy, surveillance, and misuse risks. | AI Safety and Ethics | negative | high | Privacy/ethical risk (qualitative concerns reported across studies) | 0.12 |
| Misclassification risks (false positives and false negatives) are a common limitation and can harm consumers by incorrectly restricting access or by failing to detect harm. | Error Rate | negative | high | Model error rates and downstream consumer harm risk (false positive/negative impacts discussed) | 0.12 |
| Heterogeneous study designs, outcomes, and measures across the literature hinder quantitative meta‑analysis and synthesis of effectiveness. | Other | null_result | high | Heterogeneity of study designs and outcome measures (qualitative / count of disparate metrics) | 0.12 |
| There is limited reporting on privacy safeguards, model interpretability, and external validity in the reviewed studies. | AI Safety and Ethics | negative | high | Frequency/extent of reporting on privacy safeguards and interpretability (qualitative reporting rate) | 0.12 |
| Research gaps include the need for robust causal evaluations (RCTs, field experiments), standardized metrics, transparency/interpretability, fairness analysis, and cross‑jurisdictional studies. | Research Productivity | null_result | high | Presence of causal evaluations, standardized metrics, transparency and fairness analyses (qualitative absence) | 0.12 |
| If platforms successfully deploy effective deep technologies, they may gain competitive advantages (improved retention, regulatory compliance, reduced liability), potentially raising barriers to entry and increasing returns to scale for incumbents with large behavioural datasets. | Market Structure | positive | medium | Competitive advantage and market concentration effects (theoretical/economic inference; empirical evidence limited) | 0.07 |
| Platforms with larger behavioural datasets can build more accurate risk models, making data a strategic asset and potentially concentrating market power. | Market Structure | positive | medium | Model accuracy as a function of dataset size and implications for market power (theoretical / inferred) | 0.07 |
| There is a risk of regulatory arbitrage and spillovers: better detection on regulated platforms could drive problem gamblers to unregulated venues. | Governance and Regulation | negative | low | Displacement of problem gambling to unregulated venues (speculative; not measured) | 0.04 |
| Operators and regulators should prioritize independent model audits, disclosure of data use, fairness/error rates, and field experiments to quantify causal impacts and heterogeneous effects. | Governance and Regulation | null_result | high | Policy/research actions recommended (qualitative) | 0.12 |