The Commonplace

Evidence (5126 claims)

Adoption (5126 claims)
Productivity (4409 claims)
Governance (4049 claims)
Human-AI Collaboration (2954 claims)
Labor Markets (2432 claims)
Org Design (2273 claims)
Innovation (2215 claims)
Skills & Training (1902 claims)
Inequality (1286 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding. Some cells are blank in the source, and several row totals exceed the sum of the four direction columns shown.

| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---:|---:|---:|---:|---:|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | 3 | | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | | 23 |
| Labor Share of Income | 7 | 4 | 9 | | 20 |
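As a quick sanity check on the matrix, direction shares can be recomputed from rows whose four displayed counts sum exactly to the reported total. This is an illustrative sketch (counts copied from the table above), not part of the dashboard's own tooling:

```python
# Recompute totals and positive-finding shares for selected Evidence Matrix
# rows whose four direction counts sum exactly to the reported total.
rows = {
    # outcome: (positive, negative, mixed, null)
    "Task Completion Time": (71, 5, 3, 1),
    "Skill Acquisition": (85, 31, 38, 9),
    "Consumer Welfare": (66, 29, 35, 7),
}

for outcome, counts in rows.items():
    total = sum(counts)
    pos_share = counts[0] / total
    print(f"{outcome}: total={total}, positive share={pos_share:.1%}")
```

For rows where the total exceeds the column sum (e.g. Other: 369+105+58+432 = 964 vs. a reported total of 972), the same calculation would understate the denominator, so shares there should be read as approximate.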
Active filter: Adoption
Upfront costs for AI adoption are substantial: development, clinical validation, regulatory compliance, EHR integration, and ongoing monitoring.
Implementation and regulatory literature synthesized in the review documenting typical cost categories and reported expenditures for clinical AI projects.
high · negative · Will AI Replace Physicians in the Near Future? AI Adoption B... · fixed and recurring implementation costs
Large language models (LLMs) suffer from hallucinations (fabricated facts), overconfidence, and unpredictable failure modes in open-ended tasks.
Technical papers and benchmarks on LLM factuality, calibration, and failure modes summarized in the review; empirical evaluations showing instances of fabricated outputs and calibration issues.
high · negative · Will AI Replace Physicians in the Near Future? AI Adoption B... · factual accuracy of outputs; calibration (confidence vs accuracy); failure rate ...
Contemporary AI systems have no capacity for physical examination, sensorimotor procedures, or direct patient-contact diagnostics.
Technical limitations of CNNs and LLMs described in literature (lack of embodiment, no sensorimotor capabilities) and absence of credible empirical demonstrations of safe autonomous physical clinical procedures in reviewed studies.
high · negative · Will AI Replace Physicians in the Near Future? AI Adoption B... · ability to perform physical exam / procedural tasks / direct patient-contact dia...
Current models exhibit poor out-of-distribution (OOD) generalization: performance degrades when inputs differ from training distributions.
Technical literature and robustness/domain-shift research reviewed in the paper documenting declines in model accuracy under domain shift and dataset changes.
high · negative · Will AI Replace Physicians in the Near Future? AI Adoption B... · model accuracy/performance under domain shift / OOD inputs
High upfront costs and lack of tailored financing instruments are significant financial constraints on SME AI adoption.
Case studies, finance sector reports, and SME surveys cited in the review showing cost barriers and financing gaps; evidence descriptive rather than causal.
high · negative · Artificial Intelligence Adoption for Sustainable Development... · upfront investment costs; access to tailored finance; adoption rates
Infrastructure deficits (unreliable power, inadequate broadband, limited local compute) materially constrain AI uptake by SMEs.
Policy reports and empirical studies in the literature documenting infrastructural limitations in LMIC contexts (including Botswana) that impede digital and AI deployment.
high · negative · Artificial Intelligence Adoption for Sustainable Development... · infrastructure adequacy metrics (power reliability, broadband access); AI adopti...
Skills shortages (AI literacy, data science, digital management) are a primary constraint on SME AI adoption in developing economies.
Consistent findings across surveys, interviews, and case studies in the reviewed literature highlighting skill gaps as a common barrier; authors note multiple empirical sources pointing to this constraint.
high · negative · Artificial Intelligence Adoption for Sustainable Development... · availability of AI-relevant skills; reported skills constraints limiting adoptio...
Heterogeneity in study designs and contexts within the literature limits direct comparability and generalizability of findings.
Limitation noted in the paper based on the authors' assessment of diversity across the 103 reviewed studies (varying methods, contexts, metrics).
high · negative · Models, applications, and limitations of the responsible ado... · comparability/generalizability of evidence across studies
Institutional inertia, fragmented governance structures, limited technical capacity, and weak data stewardship impede scale‑up of AI systems in the public sector.
Thematic synthesis of barriers reported across empirical studies and institutional reports within the systematic review (103 items).
high · negative · Models, applications, and limitations of the responsible ado... · ability to scale AI systems / scale‑up rate
Low‑ and middle‑income contexts face persistent gaps—infrastructure, data ecosystems, and talent retention—that slow AI adoption in public governance.
Consistent findings across multiple studies in the 103‑item corpus reporting infrastructure deficits, weak data ecosystems, and brain drain/retention issues in LMIC settings.
high · negative · Models, applications, and limitations of the responsible ado... · rate/extent of AI adoption in public governance in low- and middle‑income contex...
On-Premise RAG requires internal technical capabilities (MLOps, infrastructure engineers) to maintain and update the system.
Organizational evaluation and implementation discussion noting operational responsibilities and skill requirements for on-prem deployment.
high · negative · An Empirical Study on the Feasibility Analysis of On-Premise... · need for technical staff / internal capabilities (MLOps, infra)
On-Premise RAG incurs higher latency compared with cloud RAG.
Technology evaluations included measured system latency comparisons between architectures; exact latency values and statistical details not provided in summary.
high · negative · An Empirical Study on the Feasibility Analysis of On-Premise... · system latency (response time)
On-Premise RAG requires upfront capital expenditure (hardware) and ongoing maintenance (operations, model updates, staff).
Organizational evaluations / cost accounting and implementation discussion indicating hardware, operations, and personnel requirements for on-prem deployment; specific cost figures not provided in summary.
high · negative · An Empirical Study on the Feasibility Analysis of On-Premise... · upfront capital expenditure and ongoing maintenance costs and staffing needs
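The capex-versus-opex trade-off in the three on-premise RAG claims above can be illustrated with a toy break-even calculation. Every figure below is a hypothetical placeholder, not a number from the study:

```python
# Toy break-even sketch: cumulative on-premise cost (upfront hardware plus
# monthly operations) versus cumulative pay-per-query cloud cost.
# All figures are hypothetical placeholders.
onprem_capex = 120_000.0         # upfront hardware purchase
onprem_opex_per_month = 6_000.0  # staff, power, maintenance, model updates
cloud_cost_per_query = 0.02      # per-query cloud price
queries_per_month = 500_000

cloud_per_month = cloud_cost_per_query * queries_per_month

# Find the first month where cumulative on-premise cost no longer
# exceeds cumulative cloud cost.
months = 1
while onprem_capex + onprem_opex_per_month * months > cloud_per_month * months:
    months += 1
print(f"on-premise breaks even after {months} months")
```

With these placeholder numbers the on-premise route becomes cheaper after 30 months; if monthly cloud spend never exceeds on-premise opex, break-even never occurs, so a real analysis would cap the planning horizon.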
The January 2026 DoD AI Strategy memorandum establishes a Barrier Removal Board that provides expanded authority to waive established governance controls.
Primary source analysis: close reading of the Department of Defense January 2026 AI Strategy memorandum and related policy text (policy language describing the Barrier Removal Board and its waiver authorities). No sample size required; based on document text.
high · negative · FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... · existence and authority of the Barrier Removal Board (waiver authority over gove...
Risks include bias and discrimination, opacity in decision-making, privacy and cybersecurity threats, liability gaps, and uneven distribution of benefits that can exacerbate inequality.
Compilation from academic and policy literature, regulatory gap analyses, and examples of problematic AI use cases identified in the report's sectoral review.
high · negative · AI Governance and Data Privacy: Comparative Analysis of U.S.... · bias/discrimination incidents, decision-making opacity, privacy/cybersecurity in...
AI creates significant ethical, legal and distributional risks.
Review of policy documents, academic and policy literature, and documented examples of AI deployment across multiple sectors highlighting harms (bias, privacy breaches, liability gaps, unequal benefits).
high · negative · AI Governance and Data Privacy: Comparative Analysis of U.S.... · ethical risks, legal gaps, and distributional outcomes (inequality)
Except for the EU, jurisdictions surveyed generally lack AI-specific energy-disclosure requirements.
Comparative analysis across eleven jurisdictions identifying presence/absence of AI-specific energy disclosure rules; EU singled out as having such requirements.
high · negative · The Global Landscape of Environmental AI Regulation: From th... · existence of AI-specific energy disclosure rules (binary presence/absence by jur...
Regulatory regimes in the surveyed jurisdictions focus on training emissions more than on inference-phase energy consumption.
Regulatory mapping and lifecycle-phase analysis showing which phases (training vs inference) are covered by existing rules in the eleven jurisdictions.
high · negative · The Global Landscape of Environmental AI Regulation: From th... · regulated lifecycle phase (training coverage vs inference coverage)
Current environmental governance across the eleven jurisdictions mapped in the paper is predominantly facility-level (data-center focused) rather than model-level.
Regulatory mapping: comparative legal/policy analysis across eleven jurisdictions identifying locus of existing rules (facility vs model).
high · negative · The Global Landscape of Environmental AI Regulation: From th... · regulatory scope (proportion of jurisdictions with facility-level vs model-level...
Reliance on imperfect data and model assumptions can produce biased or misleading forecasts; careful validation, transparency about assumptions, and governance are necessary.
Risks & governance discussion in the paper raising this limitation and recommending practices (qualitative argumentation).
high · negative · AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · risk of biased or misleading forecasts arising from data/model limitations (qual...
Practical adoption challenges in African settings are substantial: limited digital infrastructure, sparse local computing capacity, weak regulatory frameworks for synthetic data use, and clinician skepticism about model validity.
Implementation and governance analyses, policy reports, and qualitative studies summarized in the review document infrastructural and regulatory barriers as well as clinician attitudes; evidence is interdisciplinary and largely descriptive, with varied geographic coverage and few large-scale empirical deployment studies.
high · negative · On the use of synthetic data for healthcare AI in Africa: Te... · infrastructure availability (digital records, compute), regulatory maturity indi...
Fidelity gaps in synthetic data (missing rare events, distributional shifts, artefacts) create risks of misclassification and biased outcomes when models are deployed in real-world African clinical settings.
Synthesis of machine-learning evaluations and clinical validation studies identified in the literature review that document instances of missing rare events, distributional mismatch, and data artefacts in synthetic datasets; these studies link such fidelity gaps to degraded performance and biased predictions in downstream models. The review highlights case examples but does not provide pooled quantitative estimates.
high · negative · On the use of synthetic data for healthcare AI in Africa: Te... · misclassification rates, biased prediction errors, distributional shifts between...
Significant financial and implementation barriers (infrastructure, staff, validation) risk worsening access inequities between well-resourced and low-resource providers.
Economic analyses, stakeholder surveys, and deployment trend reports synthesized in the paper showing higher upfront costs and validation burdens for adopters; no randomized trials.
high · negative · Framework for Government Policy on Agentic and Generative AI... · access / equity disparities / adoption gap by resource level
Regulatory fragmentation and lack of harmonized standards increase compliance complexity for healthcare AI deployments.
Policy analyses, regulatory reviews, and industry reports synthesized in the paper describing divergent national/regional regulatory approaches and their operational consequences.
high · negative · Framework for Government Policy on Agentic and Generative AI... · regulatory compliance complexity / administrative burden
Both open-source and proprietary approaches carry risks of algorithmic bias and fairness violations, especially when models are uncontrolled or poorly validated across populations.
Multiple peer-reviewed studies and audit reports summarized in the literature synthesis documenting bias/fairness issues across model types and populations.
high · negative · Framework for Government Policy on Agentic and Generative AI... · bias / fairness metrics / differential performance across populations
Rural digital divides and uneven infrastructure constrain the reach of AI health solutions and risk exacerbating health inequities unless explicitly addressed.
Synthesis of infrastructure and equity literature, national connectivity data referenced in reviewed documents, and policy analyses included in the review period 2020–2025.
high · negative · Artificial Intelligence in Healthcare in Indonesia: Are We R... · geographic disparities in digital infrastructure (broadband access, device avail...
Regulatory and governance frameworks for health AI in Indonesia are fragmented, with limited requirements for transparency/explainability and weak procurement/governance mechanisms.
Thematic analysis of national policy papers, SATUSEHAT governance reports, and regulatory documents identified in the 42 supplementary documents and literature review (2020–2025).
high · negative · Artificial Intelligence in Healthcare in Indonesia: Are We R... · presence/strength of regulation and governance mechanisms (transparency requirem...
AI-generated code can introduce security vulnerabilities and raise licensing/intellectual-property concerns.
Case studies of security incidents, analyses of generated code provenance, and vulnerability-detection studies synthesized in the review.
high · negative · ChatGPT as a Tool for Programming Assistance and Code Develo... · incidence of security vulnerabilities in generated code; instances of license or...
LLMs sometimes generate incorrect, nonsensical, or insecure code (hallucinations).
Multiple benchmarks, code-generation accuracy tests, and incident case studies documented in the empirical literature showing incorrect or fabricated outputs.
high · negative · ChatGPT as a Tool for Programming Assistance and Code Develo... · code correctness/error rate; incidence of hallucinated outputs (false or fabrica...
Data security, privacy risks, unequal gains, and regulatory shortfalls can undermine the benefits of AI/robotics adoption.
Policy and risk analyses from secondary literature, case studies, and institutional reports synthesized in the paper; examples cited but no original incident-level dataset or incidence rates provided.
high · negative · AI and Robotics Redefine Output and Growth: The New Producti... · data/privacy risk incidence, inequality measures, regulatory adequacy (qualitati...
Transition frictions and skills mismatches are important barriers to workers moving into newly created AI‑related roles.
Qualitative review of workforce and skills literature, case studies, and sector reports; evidence comes from secondary sources with varied methodologies; the paper does not report pooled quantitative estimates.
high · negative · AI and Robotics Redefine Output and Growth: The New Producti... · transition costs, skills mismatch incidence, retraining needs (labor market fric...
International and national legal approaches to these stages are fragmented, creating uncertainty for IP, privacy, liability and evidence law.
Comparative review of international and national legal approaches and judicial responses cited in the paper (secondary legal sources).
high · negative · Ethical and societal challenges to the adoption of generativ... · degree of fragmentation and legal uncertainty across jurisdictions
Output-stage risks include authenticity/deception concerns, attribution and reuse-rights disputes, reputational harms, and broader societal impacts from abundant generated media.
Review of empirical studies on media authenticity, legal cases, and policy analyses included in the narrative review.
high · negative · Ethical and societal challenges to the adoption of generativ... · authenticity, deception potential, attribution disputes, reputational and societ...
Process-stage risks include governance of model development, control over deployment, transparency, auditing, and operational safety.
Conceptual synthesis of technical governance literature and policy reports cited in the narrative review.
high · negative · Ethical and societal challenges to the adoption of generativ... · governance and operational safety concerns in model development/deployment
Input-stage risks include concerns about consent, copyright, representativeness, bias, provenance and data ownership for training material.
Synthesis of legal and policy literature and documented legal cases/statutes related to training data and IP/privacy issues (secondary sources only).
high · negative · Ethical and societal challenges to the adoption of generativ... · legal/ethical compliance and risk factors in training datasets
Generative audiovisual AI poses material ethical, control, transparency and legal challenges across three stages — input (training data), process (development & deployment), and output (use of artifacts).
Conceptual three-stage framework built from comparative review of literature, legal cases/statutes and policy reports described in the paper.
high · negative · Ethical and societal challenges to the adoption of generativ... · presence and types of ethical, governance, transparency and legal risks across i...
Limitations of the study include potential selection bias in reviewed sources and contingency of conclusions on evolving legal decisions and technology developments.
Author-stated limitations section within the paper; qualitative acknowledgement rather than empirical bias assessment.
high · negative · Ethical and societal challenges to the adoption of generativ... · reliability and generalizability of the review's conclusions
Output-stage risks include challenges to authenticity and provenance, erosion of trust (deepfakes and misinformation), and potential legal liability for harms caused by generated content.
Synthesis of technical papers on deepfakes, legal analyses of liability, and policy reports referenced in the review; no original incident dataset or quantitative prevalence estimate included.
high · negative · Ethical and societal challenges to the adoption of generativ... · authenticity/provenance verification success, consumer trust, incidence of misin...
Input-stage risks include copyright infringement, lack of consent, poor data provenance, and biases/representational harms encoded in training datasets.
Review and synthesis of academic and legal literature on training data issues; examples and case law discussed, but no original dataset audit or sample counts provided.
high · negative · Ethical and societal challenges to the adoption of generativ... · legal/compliance risk and bias in generated outputs arising from training data
Use of these models faces significant ethical, control, transparency, and legal challenges across three stages—input (training data), process (development/control), and output (generated artifacts).
Framework constructed from interdisciplinary literature (technical, ethical, legal sources) and review of statutes/judicial approaches; qualitative synthesis rather than primary data.
high · negative · Ethical and societal challenges to the adoption of generativ... · presence and severity of ethical/legal/control challenges across input/process/o...
High environmental constraints in many African regions (poor infrastructure, challenging geography, frequent climate shocks) materially affect logistics, resilience, and supply-chain performance.
Review of literature on infrastructure, geography, and climate impacts in the conceptual paper.
high · negative · Continental shift: operations and supply chain management re... · infrastructure and environmental constraints' impact on logistics/resilience
Africa is abundant in natural resources but derives relatively low development outcomes from them, creating resource-allocation and value-capture problems relevant to OSCM.
Development economics and regional studies literature cited in the paper's synthesis; conceptual claim without new empirical testing.
high · negative · Continental shift: operations and supply chain management re... · resource endowment versus development outcomes (value capture in supply chains)
Africa has a large informal economy and many informal organizations that shape supply-chain behavior and market functioning.
Literature synthesis citing development and institutional studies (no primary data collection in the paper).
high · negative · Continental shift: operations and supply chain management re... · prevalence of informality and its influence on supply-chain behavior
Results reflect small-scale e-commerce use cases; external validity to larger firms, other sectors, or more complex tasks is not established.
Scope of deployments limited to small-scale e-commerce settings as stated in methods; no cross-sector or large-firm samples reported in summary.
high · negative · Artificial Intelligence Agents in Knowledge Work: Transformi... · generalisability/external validity of observed productivity effects
The study's evidence is observational rather than randomized controlled trials, so causal estimates about productivity impacts are suggestive rather than definitive.
Declared study design: applied experimentation and observational analysis of deployments (no randomized assignment); methods section explicitly notes observational limitation.
high · negative · Artificial Intelligence Agents in Knowledge Work: Transformi... · strength of causal inference (ability to attribute observed productivity changes...
High upfront costs, weak digital/physical infrastructure, limited access to credit, low digital literacy, insecure land tenure, and sociocultural factors (including gendered access) limit uptake of digital and precision technologies among smallholders.
Consistent findings across program evaluations, qualitative stakeholder interviews, participatory assessments, and case studies cited in the synthesis.
high · negative · MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION · technology adoption rates (uptake), barriers to adoption
Limited access to capital, data, digital infrastructure, skills, and insecure land tenure reduce adoption rates for advanced innovations among smallholders.
Multiple empirical studies and program evaluations synthesized in the review documenting adoption barriers; policy review identifying structural constraints across regions.
high · negative · MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION · adoption rates of AI/IoT/precision tools, uptake of new practices
Integrating AI raises questions of accountability, transparency, fairness, privacy, and bias; managerial responsibility includes governance design, validation, and audit of AI decisions.
Normative and governance-focused synthesis citing ethical frameworks and illustrative cases; identifies governance tasks and validation/audit needs rather than empirical prevalence rates.
high · negative · Modern Management in the Age of Artificial Intelligence: Str... · presence and quality of AI governance mechanisms (accountability frameworks, tra...
Generated code can introduce security vulnerabilities.
Security analyses and code audits documenting examples where LLM-generated code contains known vulnerability patterns; incident-oriented case studies and controlled experiments assessing vulnerability incidence.
high · negative · ChatGPT as a Tool for Programming Assistance and Code Develo... · incidence of security vulnerabilities in AI-generated code
LLMs can produce plausible-looking but incorrect or insecure code (so-called 'hallucinations').
Benchmarks and controlled tests demonstrating incorrect outputs; security analyses and replicated examples showing erroneous or insecure snippets produced by LLMs across multiple models and prompts.
high · negative · ChatGPT as a Tool for Programming Assistance and Code Develo... · code correctness/error rate and frequency of insecure code returned