The Commonplace

AI monitoring and predictive models can flag risky online gambling behaviour and enable tailored interventions, but the evidence is mostly retrospective and short-term; without randomized evaluations, transparency, and stronger data governance, these systems remain promising prototypes rather than proven public‑policy tools.

Deep technologies and safer gambling: A systematic review.
L. G. Cardoso, Beatriz C R Barroso, G. Piccoli, Miguel Peixoto, Pedro Morgado, António Marques, Carla Rocha, Mark D. Griffiths, Ricardo Queirós, A. Dores · Fetched March 18, 2026 · Acta Psychologica
Retrieved via Semantic Scholar · Paper type: review/meta · Evidence: low · Relevance: 7/10 · Links: DOI, Source
A PRISMA systematic review finds that deep technologies (ML-based monitoring, predictive risk models, and AI classifiers) can detect risky gambling patterns and enable tailored interventions, but evidence of causal impacts on harm reduction, welfare, and long-term outcomes is limited and privacy, fairness, and scalability risks remain.

Deep technologies combine engineering innovation and scientific findings to solve complex problems and are becoming particularly relevant to the gambling industry. With the global rise of gambling practices and the subsequent increase in gambling-related problems and disorders, deep technologies have emerged as a way to create safer online gambling environments. However, there is still limited knowledge regarding their applicability and consequences. The present study systematically reviewed the existing literature on deep technologies in gambling environments, such as online casinos and betting platforms, and explored their potential benefits, risks, and effectiveness in promoting safer gambling experiences. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Searches were conducted in the Web of Science, PubMed, Scopus, EBSCO, and IEEE databases, supplemented by manual searches. A total of sixty-eight studies were included in the review. In general, four primary applications of deep technologies in online settings were found: (i) behavioural monitoring and feedback; (ii) predictive risk modelling; (iii) decision support and AI classifiers; and (iv) limit-setting/self-exclusion tools. They were primarily used to identify and classify problematic gambling, prompt individual action, regulate gambling behaviours, raise awareness of risk levels, promote responsible gambling practices, support research and interventions, and evaluate player-protection initiatives. Together, the findings suggest that deep technologies offer ample opportunities to enhance gambler safety and reduce potential risks, although challenges may arise from their implementation, such as privacy and ethical concerns, malicious data use, misclassification of risk levels, and difficulties in large-scale application. Limitations and directions for future studies are discussed.

Summary

Main Finding

Deep technologies (machine learning, AI-driven monitoring, and related engineering–science integrations) are increasingly applied in online gambling environments and show promise for enhancing gambler safety via behavioural monitoring and feedback, predictive risk modelling, decision-support/AI classifiers, and limit-setting/self-exclusion tools. The literature (68 studies, PRISMA-guided review) indicates potential benefits for identifying risky play, prompting safer choices, and supporting interventions, but also highlights substantial privacy, ethical, misclassification, and scalability challenges that limit current applicability and require further evaluation.

Key Points

  • Scope and evidence base
    • Systematic review following PRISMA; 68 included studies from Web of Science, PubMed, Scopus, EBSCO, IEEE and manual searches.
    • Studies collectively examine applications of deep technologies in online casinos, sportsbooks and related platforms.
  • Four primary application areas identified
    • Behavioural monitoring and feedback — real‑time tracking of play patterns and provision of tailored nudges or warnings.
    • Predictive risk modelling — algorithms to estimate individual risk of problematic gambling based on behavioural data.
    • Decision support and AI classifiers — automated classification of player states (e.g., risk levels) to trigger interventions or to inform staff/research.
    • Limit‑setting and self‑exclusion tools — algorithmically informed limits, reminders, and automated pathways to self‑exclusion.
  • Purposes of deployment
    • Identify and classify problematic gambling; prompt individual action; regulate and moderate gambling behaviour; increase awareness of risk; support research and evaluate protection measures.
  • Benefits reported
    • Improved detection of high‑risk behaviour patterns beyond self-report.
    • Potential for timely, personalized interventions that could reduce harm.
    • Support for research through richer behavioural datasets and automated classification.
  • Risks and limitations
    • Privacy and ethical concerns around continuous monitoring and sensitive behavioural inference.
    • Potential for misuse of data (commercial or malicious).
    • Misclassification risks (false positives/negatives) that can harm consumers or miss harm.
    • Challenges scaling systems across jurisdictions, platforms, and diverse player groups.
    • Limited causal evidence on long‑term effectiveness and on welfare outcomes.
  • Research gaps
    • Need for robust evaluations (RCTs, field experiments), standardized metrics, transparency/interpretability, fairness analysis, and cross‑jurisdictional studies.
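The predictive risk modelling and retrospective evaluation described above can be sketched in a few lines. This is a hypothetical illustration only: the `Session` features, the linear scoring weights, and the thresholds are invented stand-ins for a trained model, not anything from the reviewed studies; the AUC function is the standard rank-based definition used by the retrospective accuracy metrics the review reports.

```python
# Hypothetical sketch of behavioural-log risk scoring plus retrospective AUC
# evaluation; feature names and weights are invented for illustration.
from dataclasses import dataclass


@dataclass
class Session:
    stake_total: float   # total money wagered in the session
    duration_min: float  # session length in minutes
    night_play: bool     # any play between midnight and 6 am
    deposits: int        # number of deposits made mid-session


def risk_score(s: Session) -> float:
    """Toy linear score standing in for a trained model's risk output."""
    return (0.002 * s.stake_total
            + 0.01 * s.duration_min
            + 0.5 * s.night_play
            + 0.2 * s.deposits)


def auc(scores, labels):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Note that an AUC computed this way on historical logs says nothing about whether acting on the score reduces harm, which is exactly the causal gap the review identifies.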

Data & Methods

  • Review methodology
    • Systematic review conducted using PRISMA guidelines.
    • Searches performed in major bibliographic databases: Web of Science, PubMed, Scopus, EBSCO, and IEEE, plus manual searching.
    • Inclusion resulted in 68 empirical and methodological studies on deep technologies in online gambling.
  • Typical data and methods in the reviewed literature
    • Data: platform behavioural logs (bets, stakes, timestamps, session durations), account metadata, limited self‑report measures in some studies.
    • Methods: supervised and unsupervised machine learning models for classification and prediction, real‑time monitoring systems, algorithmic nudging implementations, and prototype limit/self‑exclusion mechanisms.
    • Evaluation approaches varied widely; many studies use retrospective accuracy metrics (AUC, precision/recall) rather than causal impact on harm reduction.
  • Limitations of the evidence base
    • Heterogeneous study designs and outcome measures hinder meta‑analysis.
    • Scarce randomized or longitudinal evaluations measuring welfare outcomes.
    • Limited reporting on privacy safeguards, model interpretability, and external validity.
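The misclassification trade-off behind the precision/recall metrics mentioned above can be made concrete with a small self-contained sketch (all scores and labels below are fabricated): raising the decision threshold flags fewer players, trading false negatives (harm missed) for fewer false positives (players wrongly restricted), which is the consumer-harm risk the review flags.

```python
# Toy illustration of the threshold trade-off behind precision/recall;
# scores and labels are fabricated, not taken from any reviewed study.
def confusion(scores, labels, threshold):
    """Count true/false positives and negatives at a given score threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp, fp, fn, tn


def precision_recall(scores, labels, threshold):
    """Precision: flagged players truly at risk; recall: at-risk players flagged."""
    tp, fp, fn, _ = confusion(scores, labels, threshold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Because both error types carry real costs here (restricted access vs. undetected harm), the threshold choice is a welfare decision, not just a modelling one.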

Implications for AI Economics

  • Market structure and firm incentives
    • Firms that successfully deploy effective deep technologies may gain a competitive advantage through improved customer retention, regulatory compliance, and reduced liability—raising barriers to entry and increasing returns to scale for incumbents with rich behavioural data.
    • Data becomes a strategic asset; platforms with larger datasets can build more accurate risk models, potentially concentrating market power.
  • Consumer welfare and externalities
    • If effective, technologies can reduce gambling‑related harms and improve consumer welfare; however, misclassification and false alarms can reduce welfare by denying access or stigmatizing customers.
    • There are distributional concerns: vulnerable or low‑income groups may be differentially affected by automated interventions.
    • Spillovers: better detection may shift problem gamblers towards unregulated venues or informal channels (regulatory arbitrage).
  • Information asymmetry and regulation
    • Deep tech changes the information environment—platforms can observe behaviours invisible to regulators and consumers—creating both opportunities for targeted protection and risks of exploitation (surveillance monetization).
    • Regulators may need new standards for transparency, auditing of models, data governance, and minimum efficacy thresholds for player‑protection systems.
  • Measurement, evaluation, and policy design
    • Economists should prioritize causal evaluations (RCTs, phased rollouts, difference‑in‑differences) to estimate effects on gambling spend, incidence of problem behaviour, and welfare.
    • Cost‑effectiveness analysis is needed: evaluate costs of model development/deployment and potential savings from reduced treatment costs and social harms.
    • Design of incentives: consider regulatory incentives (subsidies, penalties, certification) to align firm profit motives with social protection objectives.
  • Risks for market conduct and strategic behavior
    • Players may adapt to circumvent automated limits or manipulate signals; operators may have incentives to trade off safety for revenue unless regulated.
    • Model errors can create liabilities and reputational risks; transparent grievance/appeal processes and human oversight mechanisms should be economically priced into compliance strategies.
  • Research and policy priorities
    • Standardize outcome metrics and reporting for model performance and welfare impacts.
    • Mandate independent audits and disclosure of data use, fairness, and error rates.
    • Encourage field experiments to quantify causal impacts and heterogeneous effects across demographics.
    • Evaluate cross‑jurisdictional regulatory approaches to limit regulatory arbitrage and protect vulnerable populations.
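The causal evaluations recommended above can be illustrated with a minimal difference-in-differences sketch for a phased rollout of a player-protection feature. All spend figures in the usage below are invented, and a real analysis would need parallel-trends checks and standard errors; this only shows the core estimator.

```python
# Minimal difference-in-differences sketch for a phased feature rollout;
# any numbers fed in are hypothetical illustrations.
def mean(xs):
    return sum(xs) / len(xs)


def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """DiD estimate: change in the treated group net of the control group's change."""
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change
```

For example, with invented average spends, `diff_in_diff([100, 120], [80, 90], [100, 110], [95, 105])` yields a treated-vs-control net change of −20, the kind of causal quantity the review finds largely missing from the literature.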

Concluding note: The literature indicates meaningful promise for deep technologies to improve gambler safety, but from an AI economics perspective the key priorities are rigorous causal evaluation, careful incentive design, data governance/regulation, and attention to distributional and market impacts before large‑scale deployment.

Assessment

  • Paper type: review_meta
  • Evidence strength: low — The underlying literature largely reports retrospective predictive performance (AUC, precision/recall) from observational platform logs and prototype deployments; randomized controlled trials, longitudinal welfare measures, and causal evaluations of harm reduction are scarce, so causal claims about consumer welfare or long‑term effects are weak.
  • Methods rigor: high — The paper is a PRISMA-guided systematic review with searches across multiple major databases (Web of Science, PubMed, Scopus, EBSCO, IEEE) and manual searching, yielding 68 included studies; however, heterogeneity in study designs limited meta-analysis and formal quantitative synthesis.
  • Sample: Systematic review of 68 empirical and methodological studies on deep technologies in online gambling; primary studies mostly use platform behavioural logs (bets, stakes, timestamps, session durations), account metadata, and occasional self-report measures; methods in the literature include supervised/unsupervised ML models, real-time monitoring systems, algorithmic nudges, and prototype limit/self-exclusion tools, with evaluations typically reporting retrospective accuracy metrics rather than causal outcomes.
  • Themes: governance, adoption
  • Generalizability:
    • Focus restricted to online gambling platforms; findings may not apply to land‑based gambling or other digital platforms.
    • Many primary studies use single-platform or proprietary datasets, limiting transferability across operators with different player bases and product mixes.
    • Predominance of retrospective/short-term evaluations; limited evidence on long-term behavioural change or welfare outcomes.
    • Heterogeneous methods, outcomes, and population samples across studies complicate broader inference.
    • Regulatory, cultural, and jurisdictional differences likely affect feasibility and acceptability of monitoring/intervention approaches.
    • Prototype and lab-based implementations dominate in places, reducing external validity for real-world scaled deployment.

Claims (21)

  • The review included 68 empirical and methodological studies on deep technologies in online gambling. (n = 68)
    • Outcome: Other · Direction: null_result · Confidence: high · Measure: number of included studies (study count = 68) · Weight: 0.12
  • Searches were performed in Web of Science, PubMed, Scopus, EBSCO and IEEE, plus manual searches, following PRISMA guidelines.
    • Outcome: Other · Direction: null_result · Confidence: high · Measure: search strategy / databases searched (qualitative) · Weight: 0.12
  • Deep technologies (machine learning, AI-driven monitoring, engineering–science integrations) are increasingly applied in online casinos, sportsbooks and related platforms.
    • Outcome: Adoption Rate · Direction: positive · Confidence: medium · Measure: presence and frequency of ML/AI applications in platform contexts (qualitative / count of studies) · Weight: 0.07
  • Four primary application areas were identified: (1) behavioural monitoring and feedback, (2) predictive risk modelling, (3) decision support and AI classifiers, and (4) limit‑setting and self‑exclusion tools.
    • Outcome: Other · Direction: null_result · Confidence: high · Measure: application area classification (categorical counts / thematic presence) · Weight: 0.12
  • Behavioural monitoring and feedback systems enable real‑time tracking of play patterns and provision of tailored nudges or warnings.
    • Outcome: Other · Direction: positive · Confidence: medium · Measure: ability to track behaviour in real time and deliver tailored feedback (system functionality; implementation examples) · Weight: 0.07
  • Predictive risk‑modelling algorithms can estimate individual risk of problematic gambling using behavioural data.
    • Outcome: Output Quality · Direction: positive · Confidence: medium · Measure: predictive performance for classifying risk (AUC, precision, recall, classification accuracy) · Weight: 0.07
  • Decision‑support and AI classifiers can automatically classify player states (e.g., risk levels) to trigger interventions or inform staff/research.
    • Outcome: Decision Quality · Direction: positive · Confidence: medium · Measure: classification of player state (risk level) and triggering of interventions (classification metrics, system outputs) · Weight: 0.07
  • Limit‑setting and self‑exclusion tools informed by algorithms have been prototyped or implemented to provide algorithmically informed limits, reminders, and automated self‑exclusion pathways.
    • Outcome: Other · Direction: positive · Confidence: medium · Measure: existence and functioning of algorithmic limit/self‑exclusion tools (prototype implementation; user interactions) · Weight: 0.07
  • Reported benefits include improved detection of high‑risk behaviour patterns beyond self‑report.
    • Outcome: Output Quality · Direction: positive · Confidence: medium · Measure: detection/classification accuracy of high‑risk behaviour relative to self‑report (AUC, precision/recall comparisons) · Weight: 0.07
  • There is potential for timely, personalized interventions (nudges/warnings) that could reduce harm, but causal evidence of long‑term effectiveness is limited.
    • Outcome: Consumer Welfare · Direction: mixed · Confidence: medium · Measure: intervention uptake and short‑term behavioural change (pilot outcomes) versus long‑term harm reduction (largely unmeasured) · Weight: 0.07
  • Evaluation approaches in the reviewed literature varied widely, with many studies using retrospective accuracy metrics (AUC, precision/recall) rather than causal impact measures on harm reduction.
    • Outcome: Other · Direction: null_result · Confidence: high · Measure: type of evaluation used (retrospective predictive metrics vs causal designs) · Weight: 0.12
  • Typical data used in studies are platform behavioural logs (bets, stakes, timestamps, session durations), account metadata, and in some cases limited self‑report measures.
    • Outcome: Other · Direction: null_result · Confidence: high · Measure: data types employed in models (behavioural log variables, account metadata, self‑report presence) · Weight: 0.12
  • Privacy and ethical concerns are substantial: continuous monitoring and sensitive behavioural inference raise privacy, surveillance, and misuse risks.
    • Outcome: AI Safety and Ethics · Direction: negative · Confidence: high · Measure: privacy/ethical risk (qualitative concerns reported across studies) · Weight: 0.12
  • Misclassification risks (false positives and false negatives) are a common limitation and can harm consumers by incorrectly restricting access or by failing to detect harm.
    • Outcome: Error Rate · Direction: negative · Confidence: high · Measure: model error rates and downstream consumer harm risk (false positive/negative impacts discussed) · Weight: 0.12
  • Heterogeneous study designs, outcomes, and measures across the literature hinder quantitative meta‑analysis and synthesis of effectiveness.
    • Outcome: Other · Direction: null_result · Confidence: high · Measure: heterogeneity of study designs and outcome measures (qualitative / count of disparate metrics) · Weight: 0.12
  • There is limited reporting on privacy safeguards, model interpretability, and external validity in the reviewed studies.
    • Outcome: AI Safety and Ethics · Direction: negative · Confidence: high · Measure: frequency/extent of reporting on privacy safeguards and interpretability (qualitative reporting rate) · Weight: 0.12
  • Research gaps include the need for robust causal evaluations (RCTs, field experiments), standardized metrics, transparency/interpretability, fairness analysis, and cross‑jurisdictional studies.
    • Outcome: Research Productivity · Direction: null_result · Confidence: high · Measure: presence of causal evaluations, standardized metrics, transparency and fairness analyses (qualitative absence) · Weight: 0.12
  • If platforms successfully deploy effective deep technologies, they may gain competitive advantages (improved retention, regulatory compliance, reduced liability), potentially raising barriers to entry and increasing returns to scale for incumbents with large behavioural datasets.
    • Outcome: Market Structure · Direction: positive · Confidence: medium · Measure: competitive advantage and market concentration effects (theoretical/economic inference; empirical evidence limited) · Weight: 0.07
  • Platforms with larger behavioural datasets can build more accurate risk models, making data a strategic asset and potentially concentrating market power.
    • Outcome: Market Structure · Direction: positive · Confidence: medium · Measure: model accuracy as a function of dataset size and implications for market power (theoretical / inferred) · Weight: 0.07
  • There is a risk of regulatory arbitrage and spillovers: better detection on regulated platforms could drive problem gamblers to unregulated venues.
    • Outcome: Governance and Regulation · Direction: negative · Confidence: low · Measure: displacement of problem gambling to unregulated venues (speculative; not measured) · Weight: 0.04
  • Operators and regulators should prioritize independent model audits, disclosure of data use, fairness/error rates, and field experiments to quantify causal impacts and heterogeneous effects.
    • Outcome: Governance and Regulation · Direction: null_result · Confidence: high · Measure: policy/research actions recommended (qualitative) · Weight: 0.12

Notes