The Commonplace

Evidence (5539 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
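The matrix above can be queried programmatically. A minimal Python sketch (not the dashboard's own code) using three rows transcribed from the table, in the table's column order:

```python
# Rows transcribed from the evidence matrix above:
# (positive, negative, mixed, null, total) per outcome category.
rows = {
    "Inequality Measures": (25, 77, 32, 5, 139),
    "Task Completion Time": (88, 5, 4, 3, 100),
    "Worker Satisfaction": (47, 32, 11, 7, 97),
}

def negative_share(counts):
    """Fraction of a category's claims whose direction of finding is negative."""
    pos, neg, mixed, null, total = counts
    return neg / total

for outcome, counts in rows.items():
    print(f"{outcome}: {negative_share(counts):.0%} of claims report negative findings")
```

Only rows whose four direction counts sum exactly to the listed total are used here; some rows in the source table omit a cell, and those gaps are left as-is rather than guessed.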
Active filter: Adoption
The memo departs from the Department's prior lifecycle-assurance framework and substitutes different standards while elevating vague criteria (e.g., 'model objectivity') without operational definitions or evaluation methods.
Primary source comparison: close reading of the January 2026 memo versus prior DoD lifecycle-assurance documents; identification of new/changed terminology and lack of accompanying operational definitions or test methods in the policy text.
medium negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... clarity and operationalization of procurement standards (presence/absence of def...
By centralizing waiver decisions in a Barrier Removal Board, the memo converts baseline governance controls into exception-driven permissions (i.e., governance becomes something to be suspended rather than enforced).
Qualitative institutional analysis and primary-source reading of the memo establishing a centralized waiver process; mapping of how waiver mechanisms interact with existing assurance processes (ATO, T&E, contracting). No quantitative measurement of waiver frequency provided.
medium negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... status of governance controls (baseline enforcement vs. exception/waiver-driven)
The memo explicitly frames governance and procurement speed as a zero-sum tradeoff and labels long-standing oversight mechanisms (Authorities to Operate, test & evaluation, contracting reviews) as 'blockers' eligible for waiver.
Primary source analysis: textual interpretation of the memo and accompanying contracting directives that characterize oversight mechanisms as impediments and make them eligible for waiver. Evidence is documentary (policy text); no quantitative sample.
medium negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... framing of governance vs. speed in policy language; designation of specific over...
Regulatory fragmentation increases compliance costs and stifles cross-border scale economies; international coordination and mutual recognition of standards can lower trade costs.
Comparative governance analysis and economic reasoning about cross-border trade and compliance; no cross-country causal estimates provided in the report.
medium negative AI Governance and Data Privacy: Comparative Analysis of U.S.... compliance costs, cross-border scale economies, trade costs
Large incumbents with data/network advantages may entrench market power.
Policy and literature review noting data/network effects, observed tendencies in tech markets; sectoral examples discussed in the report.
medium negative AI Governance and Data Privacy: Comparative Analysis of U.S.... market power metrics, entry barriers, data advantage effects
Without targeted policy, AI can amplify winner-take-all dynamics (market concentration, superstar firms) and spatial inequalities (urban vs. rural).
Theoretical economic arguments and review of literature on data/network effects and concentration; comparative policy analysis that raises distributional concerns.
medium negative AI Governance and Data Privacy: Comparative Analysis of U.S.... market concentration, firm market shares, spatial inequality indicators
Without international coordination, providers may relocate compute or obscure compute locations to avoid stricter regimes; harmonized rules reduce these distortions.
Regulatory mapping and economic reasoning about geographic investment, regulatory arbitrage, and compute-location disclosure incentives.
medium negative The Global Landscape of Environmental AI Regulation: From th... likelihood of compute relocation or obfuscation (probability or incidence) and e...
Compliance and reporting requirements will impose additional costs on firms, with small providers likely disproportionately affected unless rules are proportionate.
Policy analysis of compliance and transaction costs (qualitative assessment of administrative burden and scale effects).
medium negative The Global Landscape of Environmental AI Regulation: From th... incremental compliance/reporting costs and distributional impact across firm siz...
The facility-level focus and training-phase emphasis of current governance limit regulators' ability to monitor and mitigate the full environmental externalities of modern AI systems.
Synthesis of empirical findings on model/inference impacts combined with regulatory mapping showing gaps between impact locus and regulatory reach.
medium negative The Global Landscape of Environmental AI Regulation: From th... regulatory coverage gap (degree to which regulatory instruments capture model-le...
Transparency about AI environmental impacts has declined even as deployments of generative models have accelerated, creating an information gap for regulators, users, and researchers.
Trend observations from collated operational datasets and cited empirical studies indicating reduced disclosure by providers alongside increased deployments; supported by regulatory mapping noting scant AI-specific reporting outside the EU.
medium negative The Global Landscape of Environmental AI Regulation: From th... availability/quality of environmental impact disclosures (presence/absence and g...
The larger cumulative environmental impacts of these generative models are primarily driven by inference-phase (online serving) energy consumption rather than training-phase emissions.
Evidence synthesis and operational data analysis focusing on deployment/inference patterns and relative contribution of lifecycle phases in examined models.
medium negative The Global Landscape of Environmental AI Regulation: From th... share of total energy use and emissions attributable to inference versus trainin...
Generative web-search and reasoning AI models deployed widely in 2025 impose substantially higher cumulative environmental costs than earlier AI generations, largely driven by inference at scale.
Evidence synthesis: collation of empirical studies and operational data comparing energy and emissions profiles of 2025-era model families and deployment patterns (paper-wide comparative accounting).
medium negative The Global Landscape of Environmental AI Regulation: From th... cumulative environmental costs (energy consumption and greenhouse gas emissions ...
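The two claims above rest on a simple lifecycle accounting identity: inference dominates once small per-query energy costs are multiplied by deployment scale. A minimal sketch with hypothetical figures (all numbers below are illustrative, not from the report):

```python
def inference_share(train_mwh, per_query_wh, queries):
    """Fraction of lifecycle energy attributable to the inference phase."""
    inference_mwh = per_query_wh * queries / 1e6  # Wh -> MWh
    return inference_mwh / (train_mwh + inference_mwh)

# e.g. 1,000 MWh of training vs. an assumed 0.3 Wh/query at 50 billion queries:
share = inference_share(1_000, 0.3, 50_000_000_000)
print(f"{share:.0%} of lifecycle energy from inference")
```

Even a modest per-query cost swamps a one-off training cost at web scale, which is the mechanism the claims describe.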
There is a persistent gap between policy intent (promises of ethical protection and economic opportunity) and lived experience, producing new forms of social exposure—especially for vulnerable groups.
Synthesis of qualitative findings from documents, ethics guidelines, industry statements, and stakeholder commentary indicating aspirational policy language contrasted with limited enforceable protections; specific lived-experience case data are not provided.
medium negative Promising Protection, Producing Exposure: AI Ethics and Mobi... gap between policy intent and lived experience; social exposure to harm
Lack of enforceable data-rights and accountability mechanisms strengthens incumbent platforms’ control over data markets, potentially reducing competition and hindering entry by smaller firms.
Qualitative review of regulatory texts and industry positioning showing limited enforceable data-rights provisions; theoretical market-structure inference without empirical market-share analysis.
medium negative Promising Protection, Producing Exposure: AI Ethics and Mobi... market concentration; competition; barriers to entry
Weak or non‑enforceable rules create conditions for negative externalities (data exploitation, discriminatory automation) that markets alone may not correct.
Argumentative synthesis from document analysis and theoretical framing (communication rights, market-failure logic); supported by examples in policy and industry discourse but not by empirical market-level measurement in the paper.
medium negative Promising Protection, Producing Exposure: AI Ethics and Mobi... incidence of negative externalities (data exploitation, discriminatory automatio...
The dominant framing privileges economic imaginaries of competitiveness and development over communication rights, producing regulatory blind spots and reinforcing existing inequalities.
Interpretive analysis using communication-rights theory and SCOT applied to policy and industry discourse; comparison of economic-oriented language versus rights-oriented provisions in reviewed documents.
medium negative Promising Protection, Producing Exposure: AI Ethics and Mobi... presence of communication-rights considerations; regulatory blind spots; inequal...
Regulatory attention typically overlooks vulnerable and marginalized populations (low-wage workers, women, rural communities), whose mobile communication practices and data are disproportionately exposed to harm.
Document-based qualitative analysis identifying patterns of inclusion/exclusion in regulatory texts and public debate; stakeholder commentary reviewed indicates limited consideration of these groups. (Sample count not provided.)
medium negative Promising Protection, Producing Exposure: AI Ethics and Mobi... inclusion of vulnerable groups in regulatory attention; exposure to harm
Indonesia’s governance of mobile-AI rests largely on soft‑law, aspirational instruments (guidelines, non‑binding ethics codes), which limits enforceability and accountability.
Qualitative discourse- and document-based analysis of key policy documents, national ethics guidelines, industry statements, and public stakeholder commentary related to mobile-AI in Indonesia. (The paper identifies dominant use of non‑binding instruments; exact number of documents reviewed is not specified.)
medium negative Promising Protection, Producing Exposure: AI Ethics and Mobi... policy enforceability and accountability
Fidelity-related biases risk concentrating harms among underrepresented populations, potentially increasing healthcare costs and welfare losses; economic evaluation and auditing for distributional impacts should be integrated into procurement and reimbursement decisions.
Interdisciplinary evidence synthesized from ML fairness studies, clinical validation reports, and health economics literature included in the review; the review recommends integrating distributional analysis in HTA based on documented risks of differential model performance.
medium negative On the use of synthetic data for healthcare AI in Africa: Te... differential error rates across subpopulations, distributional welfare impacts, ...
Weak or absent regulation increases uncertainty and may deter investment or lead to adoption of low-quality synthetic products with negative economic and clinical externalities.
Policy literature and implementation case studies summarized in the review that link regulatory gaps to investment risk and potential for low-quality product adoption; evidence is mostly inferential and descriptive.
medium negative On the use of synthetic data for healthcare AI in Africa: Te... investment levels, prevalence of low-quality products, clinical/economic externa...
Without improvements in fidelity and domain adaptation, synthetic data risks introducing bias and limiting clinical and economic benefits.
Integrated assessment from machine-learning evaluations, clinical validation studies, and implementation analyses within the review which link fidelity and domain mismatch to biased model outputs and reduced clinical utility; economic implications are inferred from cost-effectiveness and procurement literature cited in the review.
medium negative On the use of synthetic data for healthcare AI in Africa: Te... distributional bias, clinical utility (e.g., diagnostic accuracy, decision impac...
When AI is integrated into clinical processes, documented problems include increased appeals of automated decisions and friction in clinical workflows.
Case studies, deployment reports, and observational analyses cited in the synthesis that document increased appeals, workflow friction, or unexpected interactions caused by automation.
medium negative Framework for Government Policy on Agentic and Generative AI... workflow burden / frequency of appeals / process failures
Failing to retrain health workers for AI will produce structural labor-market mismatches, slow adoption, and reduce realized economic benefits.
Labor-market analysis and workforce readiness findings from the narrative synthesis and Delphi inputs; argument is inferential based on observed skill gaps and adoption barriers in the reviewed literature.
medium negative Artificial Intelligence in Healthcare in Indonesia: Are We R... adoption rates of AI tools, productivity gains, workforce skill alignment metric...
Indonesia risks technological dependency on foreign vendors if domestic capability, data governance, and procurement are not strengthened.
Market and policy assessment from the review, including procurement analyses and discussion in supplementary national reports and Delphi studies; based on observed market structures and procurement practices identified in the literature.
medium negative Artificial Intelligence in Healthcare in Indonesia: Are We R... degree of market reliance on foreign AI vendors / domestic market share
Approximately 58.7% of the relevant Indonesian health workforce lacks the AI competence or literacy needed for safe, scalable adoption.
Workforce readiness estimate derived from reviewed workforce assessments, Delphi consensus studies, and national reports included in the narrative synthesis; the summary does not specify sample frames or exact survey instruments that produced the 58.7% figure.
medium negative Artificial Intelligence in Healthcare in Indonesia: Are We R... percent of health workforce lacking AI competence/literacy
Indonesia’s AI healthcare maturity score is approximately 52/100, trailing regional peers (example comparators: Singapore ≈ 92, Malaysia ≈ 78).
Benchmarking performed in the review against regional maturity catalogues and international standards (EU AI Act, Singapore, Australia); maturity scoring method referenced in the paper but detailed scoring rubric and underlying metrics not fully reproduced in the summary.
medium negative Artificial Intelligence in Healthcare in Indonesia: Are We R... composite AI-health system maturity score (0–100)
Widespread adoption of LLMs without adequate verification increases systemic cybersecurity risks with potential economic spillovers.
Synthesis of security incident case studies and risk analyses revealing vulnerabilities in generated code and potential downstream impacts.
medium negative ChatGPT as a Tool for Programming Assistance and Code Develo... frequency/severity of security breaches attributable to AI-generated code; downs...
Models lack deep contextual reasoning and may fail on tasks requiring long-horizon design thinking or specialized domain knowledge.
Benchmark failures and user studies in the reviewed literature demonstrating degraded performance on complex architectural/design tasks and domain-specific reasoning problems.
medium negative ChatGPT as a Tool for Programming Assistance and Code Develo... task success on long-horizon design tasks, reasoning/chain-of-thought benchmark ...
Use of these tools can mask gaps in foundational computational skills among novices.
Pedagogical case studies and assessments indicating reliance on AI can produce superficial solutions and lower demonstrated understanding of core concepts.
medium negative ChatGPT as a Tool for Programming Assistance and Code Develo... measures of foundational skill (conceptual quiz scores, ability to solve novel/u...
Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
Policy analysis and normative recommendations based on identified externalities in the reviewed literature; no empirical policy evaluation in paper.
medium negative Ethical and societal challenges to the adoption of generativ... existence of externalities and scope for public policy interventions
Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models.
Regulatory and economic literature synthesized in the narrative review; policy/legal case citations included but no new cost estimates provided.
medium negative Ethical and societal challenges to the adoption of generativ... compliance costs, market entry barriers, diffusion rates
Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues.
Theoretical arguments using attention-economy literature and secondary studies; narrative reasoning without new empirical quantification.
medium negative Ethical and societal challenges to the adoption of generativ... attention per item, willingness-to-pay, content monetization
Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration.
Market-structure and firm-strategy analysis drawn from secondary literature and conceptual arguments in the paper.
medium negative Ethical and societal challenges to the adoption of generativ... market concentration, vertical integration, strategic advantage for data/infrast...
Information-quality externalities from misinformation and reduced trust impose social costs that are not internalized by producers, justifying policy interventions such as liability rules or provenance standards.
Theoretical externality reasoning and policy literature reviewed; no social-welfare empirical quantification included in the paper.
medium negative Ethical and societal challenges to the adoption of generativ... social-welfare losses from misinformation and trust erosion
Economies of scale, data-driven advantages, and compute costs may concentrate market power in a few platforms or studios, raising entry barriers.
Market-structure reasoning and referenced industry analyses in the literature review; no empirical market-concentration metrics computed in the paper.
medium negative Ethical and societal challenges to the adoption of generativ... market concentration (e.g., HHI), entry rates, and barriers to entry
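The entry above names the Herfindahl-Hirschman Index (HHI) as the relevant concentration measure. A minimal sketch of the index on hypothetical market shares (the shares below are illustrative, not data from the paper):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares in percent.
    Ranges from near 0 (atomistic market) to 10,000 (monopoly)."""
    assert abs(sum(shares_pct) - 100) < 1e-6, "shares must sum to 100%"
    return sum(s * s for s in shares_pct)

# Four equal firms vs. one dominant platform plus a fringe:
print(hhi([25, 25, 25, 25]))   # 2500
print(hhi([70, 10, 10, 10]))   # 5200
```

The squaring is what makes the index sensitive to dominance: shifting share toward one firm raises HHI even when the number of firms is unchanged.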
Cross-border enforcement difficulties and divergent national rules produce legal fragmentation in regulation and judiciary responses to generative audiovisual AI.
Comparative review of international statutes and judicial approaches included in the paper; qualitative legal analysis rather than empirical cross-jurisdictional enforcement metrics.
medium negative Ethical and societal challenges to the adoption of generativ... degree of legal fragmentation across jurisdictions (differences in statutes, enf...
Process-stage risks include concentration of capabilities among a few platforms/actors and deficits in control, governance and transparency (e.g., limited explainability and restricted model access).
Policy and market-structure literature reviewed; descriptive evidence of platform concentration cited qualitatively but no original market-share analysis or sample sizes.
medium negative Ethical and societal challenges to the adoption of generativ... market concentration of model capabilities and levels of governance/transparency
Key data challenges in African contexts are measurement error, censoring, selection bias (informal actors absent from official datasets), privacy/ethical concerns, and limited digital trace coverage in some regions.
Methodological critique synthesised from literature in the paper.
medium negative Continental shift: operations and supply chain management re... threats to data quality and representativeness for empirical studies
Short-run adoption and adjustment costs reduce firm profits during early phases of AI adoption.
Theoretical model predictions from the differentiated Bertrand framework; empirical component claims alignment with these short-run effects (no sample size or estimation details given in summary).
medium negative MODELING HOSPITALITY AND TOURISM STRATEGIES short-run firm profit (profit reduction)
This generation–verification mismatch produces a chronic bottleneck in development processes.
Analytic diagnosis and behavioral reasoning in the paper (design principles and system analysis); no empirical testing or simulation results provided.
medium negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... development process throughput constrained by verification capacity
AI-assisted software development creates a persistent structural imbalance: generation throughput (machine-produced code, tests, docs) outpaces human verification capacity.
Conceptual/theoretical argument and systems/architectural modeling in the paper; no empirical measurement, no sample size, no field data reported.
medium negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... ratio of machine generation throughput to human verification throughput / verifi...
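The claim above is structural: when generation throughput exceeds verification throughput, unverified work accumulates without bound. A minimal queueing sketch under assumed rates (both rates below are hypothetical, not measurements from the paper):

```python
def unverified_backlog(gen_rate, verify_rate, periods):
    """Items awaiting human review after `periods` steps, starting empty.
    Backlog cannot go negative: reviewers idle once the queue drains."""
    backlog = 0
    for _ in range(periods):
        backlog = max(0, backlog + gen_rate - verify_rate)
    return backlog

# e.g. 12 generated vs. 8 verified artifacts per day over a 10-day sprint:
print(unverified_backlog(12, 8, 10))  # 40 -- grows linearly in the rate gap
```

When the generation rate is at or below the verification rate, the backlog stays at zero; any persistent excess compounds, which is the "chronic bottleneck" the framework describes.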
Data‑driven agritech platforms exhibit network effects and potential for market power, implying a policy need for data portability and interoperability to preserve competition.
Economic reasoning, policy reports, and case study examples summarized in the review; the claim is grounded in market analysis rather than large‑scale causal studies.
medium negative MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION market concentration, barriers to entry, interoperability metrics
If left unregulated and untargeted, AI and digital agritech platforms risk concentrating surplus with technology providers and capital owners, potentially increasing rural inequality and weakening smallholder bargaining power.
Theoretical market‑structure analysis, case studies of platform markets, and policy analyses cited in the paper; empirical causal evidence on long‑run distributional effects is limited.
medium negative MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION distribution of surplus/value capture, measures of rural inequality, smallholder...
Data ownership, lack of interoperability, privacy concerns, and concentration of digital agritech platforms create risks for competition and equitable value capture in agricultural value chains.
Policy reports, market analyses, and case studies discussed in the paper; the claim is supported by descriptive evidence and theoretical assessments rather than large causal estimates.
medium negative MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION market concentration, distribution of surplus/value capture, competition indicat...
Differences in access to AI tools and digital infrastructure could exacerbate global and within-country inequalities in research capacity and outputs.
Statement in Distributional and Competitive Effects. Motivated by observed heterogeneity in infrastructure and access; abstract does not provide empirical heterogeneity estimates or samples.
medium negative Artificial Intelligence for Improving Research Productivity ... access to AI tools/infrastructure, disparities in research outputs (publication ...
Institutions that adopt and integrate AI effectively may gain disproportionate advantages, increasing stratification in academic prestige and funding.
Presented as a distributional/competitive implication. Based on theory and possibly institutional case studies; no causal evidence or quantitative estimates provided in the abstract.
medium negative Artificial Intelligence for Improving Research Productivity ... changes in institutional prestige/rankings, funding allocation shifts, measures ...
Security vulnerabilities and IP leakage create negative externalities; absent internalization, social costs (breaches, legal disputes) may rise.
Security analyses, documented incidents, and economic externality reasoning synthesized from the literature; empirical quantification of social cost is limited.
medium negative ChatGPT as a Tool for Programming Assistance and Code Develo... social costs from security breaches and IP disputes (incidence and severity)
Generated code may incidentally reproduce copyrighted or licensed snippets from training data.
Analyses detecting verbatim or near-verbatim reproductions of licensed/copyrighted code in model outputs in selected tests and audits; evidence heterogeneous and depends on prompts and model/data.
medium negative ChatGPT as a Tool for Programming Assistance and Code Develo... frequency of reproduced copyrighted/licensed code in outputs
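The audits described above look for verbatim or near-verbatim reproduction of licensed code in model outputs. One common detection approach is token n-gram overlap against a reference corpus; the sketch below is illustrative (the tokenization, n-gram size, and example strings are assumptions, not the audits' actual method):

```python
def ngrams(tokens, n=6):
    """All contiguous token n-grams of a token sequence, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(output_code, reference_code, n=6):
    """Share of the output's n-grams that also appear in the reference."""
    out = ngrams(output_code.split(), n)
    ref = ngrams(reference_code.split(), n)
    return len(out & ref) / len(out) if out else 0.0

licensed = "for i in range ( len ( xs ) ) : total += xs [ i ]"
generated = "for i in range ( len ( xs ) ) : total += xs [ i ]"
print(overlap_ratio(generated, licensed))  # 1.0 for an exact copy
```

Real audits typically normalize whitespace and identifiers before matching, which is why reported reproduction rates are sensitive to the exact pipeline, consistent with the heterogeneous evidence noted above.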
Outputs often lack deep, project-level contextual reasoning (e.g., design tradeoffs, architecture constraints).
Qualitative failure-mode analyses, user studies, and benchmark tasks showing limitations in system-level reasoning and context-aware design decisions; evidence from short-horizon labs and case studies.
medium negative ChatGPT as a Tool for Programming Assistance and Code Develo... ability to produce context-appropriate architectural/design decisions
There is a risk of shallow learning if learners over-rely on AI outputs without understanding fundamentals.
Educational studies and observational analyses indicating reduced engagement with underlying concepts for some learners using AI assistance, plus qualitative reports from instructors; studies often short-term.
medium negative ChatGPT as a Tool for Programming Assistance and Code Develo... depth of conceptual understanding and learning outcomes