The Commonplace

Evidence (4793 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
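Note that the four direction columns do not always sum to the listed totals (e.g., Other: 402 + 112 + 67 + 480 = 1061 against a listed total of 1076), which suggests additional direction categories not displayed here; rows that list only four numbers do sum exactly, implying one blank column. A minimal consistency-check sketch, with a few five-column rows transcribed from the matrix above:

```python
# A few five-column rows transcribed from the Evidence Matrix above:
# (outcome, positive, negative, mixed, null, total)
rows = [
    ("Other", 402, 112, 67, 480, 1076),
    ("Firm Productivity", 306, 39, 70, 12, 432),
    ("Skill Acquisition", 92, 33, 40, 9, 174),
    ("Developer Productivity", 34, 4, 3, 1, 42),
]

def unexplained(rows):
    """Return (outcome, total - column sum) for rows whose four
    direction columns do not add up to the listed total."""
    return [(name, total - (p + n + m + z))
            for name, p, n, m, z, total in rows
            if p + n + m + z != total]

print(unexplained(rows))  # rows carrying a residual claim count
```

Run on the four rows above, this flags Other (residual 15) and Firm Productivity (residual 5), while Skill Acquisition and Developer Productivity balance exactly.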
Active filter: Productivity
Infrastructure deficits (connectivity, legacy systems) limit scale and reliability of digital/AI initiatives.
Recurring barrier documented across governance analyses and case studies; evidence includes reports of downtime, integration failures, and limited geographic reach; no unified cross-study sample provided.
Confidence: medium · Direction: negative · Source: Digital Transformation and AI Adoption in Government: Evalua... · Measures: system reliability/uptime, scalability, geographic/service coverage
Unresolved liability and regulatory uncertainty increase malpractice risk and insurance costs, leading insurers and providers to favor conservative adoption and continued human-in-the-loop safeguards.
Regulatory/legal analysis and stakeholder behavior models discussed in the review; observed cautious deployment patterns in practice noted in the literature.
Confidence: medium · Direction: negative · Source: Will AI Replace Physicians in the Near Future? AI Adoption B... · Measures: malpractice risk; insurance premiums; adoption conservatism; presence of human-i...
Regulatory pathways and approval standards are evolving but are not yet aligned with deployment of high-autonomy clinical systems.
Review of recent policy analyses and regulatory documents showing ongoing updates and gaps between current standards and requirements for high-autonomy AI deployment.
Confidence: medium · Direction: negative · Source: Will AI Replace Physicians in the Near Future? AI Adoption B... · Measures: alignment between regulatory frameworks and high-autonomy clinical deployment re...
Robust, locally appropriate data governance (privacy, interoperability, standards) is a public good that underpins trust and data-driven markets; weak governance raises risks of exclusion and foreign dependency.
Governance and policy literature synthesized in the review; conceptual arguments supported by examples but limited empirical evaluation in LMIC SME contexts.
Confidence: medium · Direction: negative · Source: Artificial Intelligence Adoption for Sustainable Development... · Measures: data governance robustness; SME inclusion in data-driven markets; foreign depend...
Platform effects and supplier ecosystems associated with AI may create winner-takes-most market dynamics, so policy should monitor market concentration and enable competitive access to core AI services.
Literature on platforms and market structure combined with case examples; review notes potential for concentration but lacks broad causal studies quantifying effects in LMIC SME markets.
Confidence: medium · Direction: negative · Source: Artificial Intelligence Adoption for Sustainable Development... · Measures: market concentration metrics; access to core AI services by SMEs
Fragmented or weak data governance (privacy rules, standards, interoperability, and trust) reduces SMEs’ ability to participate in data-driven markets and adopt AI.
Policy analyses and governance-focused studies in the review highlighting data governance weaknesses in LMICs and associated risks for SMEs; examples discussed rather than quantified nationally.
Confidence: medium · Direction: negative · Source: Artificial Intelligence Adoption for Sustainable Development... · Measures: data governance quality; SME participation in data markets; trust/interoperabili...
Scalability and rapid model improvements provided by cloud vendors are harder to capture on-premise.
Comparative discussion in TOE analysis about vendor-managed continuous model improvements and cloud scalability versus on-prem constraints; not backed by longitudinal empirical comparison in the summary.
Confidence: medium · Direction: negative · Source: An Empirical Study on the Feasibility Analysis of On-Premise... · Measures: ability to capture rapid model improvements and scalability
Sanctions and supply-chain restrictions affect access to hardware and software, altering adoption paths and increasing costs; domestic substitution or international cooperation will influence future trajectories.
Institutional analysis documenting sanctions/import restrictions and their implications for hardware/software access; qualitative assessment of substitution and cooperation options.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: availability and cost of hardware/software inputs for AI and resulting adoption ...
The barriers to AI adoption in Russia’s extractive industries interact systemically (e.g., lack of data reduces demand for talent; weak infrastructure deters investment), so piecemeal measures will have limited effect.
Analytical synthesis identifying co-moving constraints across cross-country trends and qualitative firm-level evidence showing interacting bottlenecks.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: overall effectiveness of isolated vs. coordinated interventions on AI diffusion ...
Institutional failures—weak standards/interoperability, limited public–private coordination, regulatory uncertainty, and sanctions/import restrictions—exacerbate diffusion problems for AI in extractive sectors.
Institutional review of standards, procurement and public–private coordination mechanisms; documentation of regulatory uncertainty and sanctions/import restrictions affecting hardware/software access.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: standards/interoperability quality, level of public–private coordination, regula...
Infrastructure is underdeveloped in Russia relative to peers: insufficient sensorization, limited edge/cloud connectivity, inadequate computing hardware, and immature localized software stacks all hinder deployment.
ICT infrastructure indicators, comparative metrics on sensorization/connectivity/computing availability, and project case evidence from extractive firms.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: sensor density, connectivity quality (edge/cloud readiness), availability of com...
There are human capital constraints: shortages of AI talent in industry-specific roles, limited retraining of engineering staff, and brain drain reduce the sector's capacity to absorb and deploy AI.
Workforce and education statistics, patent/activity counts, and expert commentary; qualitative case evidence showing limited retraining and talent shortages in industry-specific AI roles.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: industry-specific AI talent supply, retraining rates for engineering staff, meas...
Absolute and relative AI investment volumes in the Russian extractive sector are lower than in the US, China and EU; private risk capital is limited and public support insufficiently targeted to scale-up projects.
Investment datasets and national/industry statistics comparing public and private AI investment volumes (absolute and relative to output) for extractive sectors across jurisdictions (2020–2025).
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: AI investment volumes (absolute and per unit of extractive output); availability...
Data access is a primary bottleneck: datasets are fragmented, often proprietary or closed, ownership rules are unclear, and mechanisms for safe data sharing are weak, hindering model training and cross-firm applications.
Review of data governance frameworks across jurisdictions and firm-level case evidence documenting closed/proprietary datasets and weak sharing mechanisms.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: availability and usability of industrial data for AI model training and cross-fi...
The gap is driven not only by smaller investment flows but also by institutional constraints—limited data access, weak data governance, human capital shortages, and inadequate digital infrastructure—that together suppress diffusion and scaling of AI applications.
Institutional analysis (review of data governance frameworks, regulatory regimes, standards, market structure) plus qualitative firm-level case studies and expert commentary illustrating how these factors impede adoption and scaling.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: diffusion and scaling of AI applications in extractive industries
Russia’s adoption of AI in extractive industries is both slower (lower growth rate) and shallower (lower depth of digitalization) than peer jurisdictions in 2020–2025.
Time-series comparison of digitalization/digital-maturity proxies and AI investment volumes across countries for 2020–2025; synthesis of trend differences from public datasets and sectoral indices.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: rate of change in digitalization indicators and depth of digitalization (digit m...
Between 2020–2025 Russia trails the United States, China and the EU on both digitalization indicators and AI investment volumes in the mining and oil & gas sectors.
Comparative multi-country trend analysis using publicly available investment and digitalization indicators (national/industry statistics, investment datasets, and sectoral digitalization indices) comparing Russia, the US, China and the EU over 2020–2025.
Confidence: medium · Direction: negative · Source: ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... · Measures: digitalization levels and AI investment volumes per unit of extractive output (m...
Regulatory fragmentation increases compliance costs and stifles cross-border scale economies; international coordination and mutual recognition of standards can lower trade costs.
Comparative governance analysis and economic reasoning about cross-border trade and compliance; no cross-country causal estimates provided in the report.
Confidence: medium · Direction: negative · Source: AI Governance and Data Privacy: Comparative Analysis of U.S.... · Measures: compliance costs, cross-border scale economies, trade costs
Large incumbents with data/network advantages may entrench market power.
Policy and literature review noting data/network effects, observed tendencies in tech markets; sectoral examples discussed in the report.
Confidence: medium · Direction: negative · Source: AI Governance and Data Privacy: Comparative Analysis of U.S.... · Measures: market power metrics, entry barriers, data advantage effects
Without targeted policy, AI can amplify winner-take-all dynamics (market concentration, superstar firms) and spatial inequalities (urban vs. rural).
Theoretical economic arguments and review of literature on data/network effects and concentration; comparative policy analysis that raises distributional concerns.
Confidence: medium · Direction: negative · Source: AI Governance and Data Privacy: Comparative Analysis of U.S.... · Measures: market concentration, firm market shares, spatial inequality indicators
There is a persistent gap between policy intent (promises of ethical protection and economic opportunity) and lived experience, producing new forms of social exposure—especially for vulnerable groups.
Synthesis of qualitative findings from documents, ethics guidelines, industry statements, and stakeholder commentary indicating aspirational policy language contrasted with limited enforceable protections; specific lived-experience case data are not provided.
Confidence: medium · Direction: negative · Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Measures: gap between policy intent and lived experience; social exposure to harm
Lack of enforceable data-rights and accountability mechanisms strengthens incumbent platforms’ control over data markets, potentially reducing competition and hindering entry by smaller firms.
Qualitative review of regulatory texts and industry positioning showing limited enforceable data-rights provisions; theoretical market-structure inference without empirical market-share analysis.
Confidence: medium · Direction: negative · Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Measures: market concentration; competition; barriers to entry
Weak or non‑enforceable rules create conditions for negative externalities (data exploitation, discriminatory automation) that markets alone may not correct.
Argumentative synthesis from document analysis and theoretical framing (communication rights, market-failure logic); supported by examples in policy and industry discourse but not by empirical market-level measurement in the paper.
Confidence: medium · Direction: negative · Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Measures: incidence of negative externalities (data exploitation, discriminatory automatio...
The dominant framing privileges economic imaginaries of competitiveness and development over communication rights, producing regulatory blind spots and reinforcing existing inequalities.
Interpretive analysis using communication-rights theory and SCOT applied to policy and industry discourse; comparison of economic-oriented language versus rights-oriented provisions in reviewed documents.
Confidence: medium · Direction: negative · Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Measures: presence of communication-rights considerations; regulatory blind spots; inequal...
Regulatory attention typically overlooks vulnerable and marginalized populations (low-wage workers, women, rural communities), whose mobile communication practices and data are disproportionately exposed to harm.
Document-based qualitative analysis identifying patterns of inclusion/exclusion in regulatory texts and public debate; stakeholder commentary reviewed indicates limited consideration of these groups. (Sample count not provided.)
Confidence: medium · Direction: negative · Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Measures: inclusion of vulnerable groups in regulatory attention; exposure to harm
Indonesia’s governance of mobile-AI rests largely on soft‑law, aspirational instruments (guidelines, non‑binding ethics codes), which limits enforceability and accountability.
Qualitative discourse- and document-based analysis of key policy documents, national ethics guidelines, industry statements, and public stakeholder commentary related to mobile-AI in Indonesia. (The paper identifies dominant use of non‑binding instruments; exact number of documents reviewed is not specified.)
Confidence: medium · Direction: negative · Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Measures: policy enforceability and accountability
There is evidence of problematic patterns in automated decision appeals and workflow interactions when AI is integrated into clinical processes.
Case studies, deployment reports, and observational analyses cited in the synthesis that document increased appeals, workflow friction, or unexpected interactions caused by automation.
Confidence: medium · Direction: negative · Source: Framework for Government Policy on Agentic and Generative AI... · Measures: workflow burden / frequency of appeals / process failures
Failing to retrain health workers for AI will produce structural labor-market mismatches, slow adoption, and reduce realized economic benefits.
Labor-market analysis and workforce readiness findings from the narrative synthesis and Delphi inputs; argument is inferential based on observed skill gaps and adoption barriers in the reviewed literature.
Confidence: medium · Direction: negative · Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... · Measures: adoption rates of AI tools, productivity gains, workforce skill alignment metric...
Indonesia risks technological dependency on foreign vendors if domestic capability, data governance, and procurement are not strengthened.
Market and policy assessment from the review, including procurement analyses and discussion in supplementary national reports and Delphi studies; based on observed market structures and procurement practices identified in the literature.
Confidence: medium · Direction: negative · Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... · Measures: degree of market reliance on foreign AI vendors / domestic market share
Approximately 58.7% of the relevant Indonesian health workforce lacks the AI competence or literacy needed for safe, scalable adoption.
Workforce readiness estimate derived from reviewed workforce assessments, Delphi consensus studies, and national reports included in the narrative synthesis; the summary does not specify sample frames or exact survey instruments that produced the 58.7% figure.
Confidence: medium · Direction: negative · Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... · Measures: percent of health workforce lacking AI competence/literacy
Indonesia’s AI healthcare maturity score is approximately 52/100, trailing regional peers (example comparators: Singapore ≈ 92, Malaysia ≈ 78).
Benchmarking performed in the review against regional maturity catalogues and international standards (EU AI Act, Singapore, Australia); maturity scoring method referenced in the paper but detailed scoring rubric and underlying metrics not fully reproduced in the summary.
Confidence: medium · Direction: negative · Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... · Measures: composite AI-health system maturity score (0–100)
Widespread adoption of LLMs without adequate verification increases systemic cybersecurity risks with potential economic spillovers.
Synthesis of security incident case studies and risk analyses revealing vulnerabilities in generated code and potential downstream impacts.
Confidence: medium · Direction: negative · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Measures: frequency/severity of security breaches attributable to AI-generated code; downs...
Models lack deep contextual reasoning and may fail on tasks requiring long-term design thinking or deep domain knowledge.
Benchmark failures and user studies in the reviewed literature demonstrating degraded performance on complex architectural/design tasks and domain-specific reasoning problems.
Confidence: medium · Direction: negative · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Measures: task success on long-horizon design tasks, reasoning/chain-of-thought benchmark ...
Use of these tools can mask gaps in foundational computational skills among novices.
Pedagogical case studies and assessments indicating reliance on AI can produce superficial solutions and lower demonstrated understanding of core concepts.
Confidence: medium · Direction: negative · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Measures: measures of foundational skill (conceptual quiz scores, ability to solve novel/u...
Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
Policy analysis and normative recommendations based on identified externalities in the reviewed literature; no empirical policy evaluation in paper.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: existence of externalities and scope for public policy interventions
Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models.
Regulatory and economic literature synthesized in the narrative review; policy/legal case citations included but no new cost estimates provided.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: compliance costs, market entry barriers, diffusion rates
Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues.
Theoretical arguments using attention-economy literature and secondary studies; narrative reasoning without new empirical quantification.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: attention per item, willingness-to-pay, content monetization
Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration.
Market-structure and firm-strategy analysis drawn from secondary literature and conceptual arguments in the paper.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: market concentration, vertical integration, strategic advantage for data/infrast...
Information-quality externalities from misinformation and reduced trust impose social costs that are not internalized by producers, justifying policy interventions such as liability rules or provenance standards.
Theoretical externality reasoning and policy literature reviewed; no social-welfare empirical quantification included in the paper.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: social-welfare losses from misinformation and trust erosion
Economies of scale, data-driven advantages, and compute costs may concentrate market power in a few platforms or studios, raising entry barriers.
Market-structure reasoning and referenced industry analyses in the literature review; no empirical market-concentration metrics computed in the paper.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: market concentration (e.g., HHI), entry rates, and barriers to entry
Cross-border enforcement difficulties and divergent national rules produce legal fragmentation in regulation and judiciary responses to generative audiovisual AI.
Comparative review of international statutes and judicial approaches included in the paper; qualitative legal analysis rather than empirical cross-jurisdictional enforcement metrics.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: degree of legal fragmentation across jurisdictions (differences in statutes, enf...
Process-stage risks include concentration of capabilities among a few platforms/actors and deficits in control, governance and transparency (e.g., limited explainability and restricted model access).
Policy and market-structure literature reviewed; descriptive evidence of platform concentration cited qualitatively but no original market-share analysis or sample sizes.
Confidence: medium · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Measures: market concentration of model capabilities and levels of governance/transparency
Key data challenges in African contexts are measurement error, censoring, selection bias (informal actors absent from official datasets), privacy/ethical concerns, and limited digital trace coverage in some regions.
Methodological critique synthesised from literature in the paper.
Confidence: medium · Direction: negative · Source: Continental shift: operations and supply chain management re... · Measures: threats to data quality and representativeness for empirical studies
AI adoption costs and adjustment frictions reduce firm profits during early adoption phases.
Theoretical model predictions from the differentiated Bertrand framework; empirical component claims alignment with these short-run effects (no sample size or estimation details given in summary).
Confidence: medium · Direction: negative · Source: MODELING HOSPITALITY AND TOURISM STRATEGIES · Measures: short-run firm profit (profit reduction)
Key constraints on realized gains include governance complexity, model reliability limits (errors, brittleness, distribution shifts), orchestration challenges integrating agents across systems, and ongoing need for human oversight for safety, fairness, and quality control.
Qualitative observations and limitations reported from the Alfred AI deployments and authors' analysis of operational experience; evidence comes from live deployments but is descriptive rather than quantitative.
Confidence: medium · Direction: negative · Source: Artificial Intelligence Agents in Knowledge Work: Transformi... · Measures: presence and impact of governance complexity, model errors, orchestration diffic...
The generation–verification mismatch, in which machine output outpaces human review capacity, produces a chronic bottleneck in development processes.
Analytic diagnosis and behavioral reasoning in the paper (design principles and system analysis); no empirical testing or simulation results provided.
Confidence: medium · Direction: negative · Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... · Measures: development process throughput constrained by verification capacity
AI-assisted software development creates a persistent structural imbalance: generation throughput (machine-produced code, tests, docs) outpaces human verification capacity.
Conceptual/theoretical argument and systems/architectural modeling in the paper; no empirical measurement, no sample size, no field data reported.
Confidence: medium · Direction: negative · Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... · Measures: ratio of machine generation throughput to human verification throughput / verifi...
Data‑driven agritech platforms exhibit network effects and potential for market power, implying a policy need for data portability and interoperability to preserve competition.
Economic reasoning, policy reports, and case study examples summarized in the review; the claim is grounded in market analysis rather than large‑scale causal studies.
Confidence: medium · Direction: negative · Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION · Measures: market concentration, barriers to entry, interoperability metrics
If left unregulated and untargeted, AI and digital agritech platforms risk concentrating surplus with technology providers and capital owners, potentially increasing rural inequality and weakening smallholder bargaining power.
Theoretical market‑structure analysis, case studies of platform markets, and policy analyses cited in the paper; empirical causal evidence on long‑run distributional effects is limited.
Confidence: medium · Direction: negative · Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION · Measures: distribution of surplus/value capture, measures of rural inequality, smallholder...
Data ownership, lack of interoperability, privacy concerns, and concentration of digital agritech platforms create risks for competition and equitable value capture in agricultural value chains.
Policy reports, market analyses, and case studies discussed in the paper; the claim is supported by descriptive evidence and theoretical assessments rather than large causal estimates.
Confidence: medium · Direction: negative · Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION · Measures: market concentration, distribution of surplus/value capture, competition indicat...