The Commonplace

Evidence (4333 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Filter: Governance
Transparency about AI environmental impacts has declined even as deployments of generative models have accelerated, creating an information gap for regulators, users, and researchers.
Trend observations from collated operational datasets and cited empirical studies indicating reduced disclosure by providers alongside increased deployments; supported by regulatory mapping noting scant AI-specific reporting outside the EU.
Confidence: medium | Direction: negative | Source: The Global Landscape of Environmental AI Regulation: From th... | Outcome: availability/quality of environmental impact disclosures (presence/absence and g...
The larger cumulative environmental impacts of these generative models are primarily driven by inference-phase (online serving) energy consumption rather than training-phase emissions.
Evidence synthesis and operational data analysis focusing on deployment/inference patterns and relative contribution of lifecycle phases in examined models.
Confidence: medium | Direction: negative | Source: The Global Landscape of Environmental AI Regulation: From th... | Outcome: share of total energy use and emissions attributable to inference versus trainin...
Generative web-search and reasoning AI models deployed widely in 2025 impose substantially higher cumulative environmental costs than earlier AI generations, largely driven by inference at scale.
Evidence synthesis: collation of empirical studies and operational data comparing energy and emissions profiles of 2025-era model families and deployment patterns (paper-wide comparative accounting).
Confidence: medium | Direction: negative | Source: The Global Landscape of Environmental AI Regulation: From th... | Outcome: cumulative environmental costs (energy consumption and greenhouse gas emissions ...
There is a persistent gap between policy intent (promises of ethical protection and economic opportunity) and lived experience, producing new forms of social exposure—especially for vulnerable groups.
Synthesis of qualitative findings from documents, ethics guidelines, industry statements, and stakeholder commentary indicating aspirational policy language contrasted with limited enforceable protections; specific lived-experience case data are not provided.
Confidence: medium | Direction: negative | Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... | Outcome: gap between policy intent and lived experience; social exposure to harm
Lack of enforceable data-rights and accountability mechanisms strengthens incumbent platforms’ control over data markets, potentially reducing competition and hindering entry by smaller firms.
Qualitative review of regulatory texts and industry positioning showing limited enforceable data-rights provisions; theoretical market-structure inference without empirical market-share analysis.
Confidence: medium | Direction: negative | Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... | Outcome: market concentration; competition; barriers to entry
Weak or non‑enforceable rules create conditions for negative externalities (data exploitation, discriminatory automation) that markets alone may not correct.
Argumentative synthesis from document analysis and theoretical framing (communication rights, market-failure logic); supported by examples in policy and industry discourse but not by empirical market-level measurement in the paper.
Confidence: medium | Direction: negative | Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... | Outcome: incidence of negative externalities (data exploitation, discriminatory automatio...
The dominant framing privileges economic imaginaries of competitiveness and development over communication rights, producing regulatory blind spots and reinforcing existing inequalities.
Interpretive analysis using communication-rights theory and SCOT applied to policy and industry discourse; comparison of economic-oriented language versus rights-oriented provisions in reviewed documents.
Confidence: medium | Direction: negative | Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... | Outcome: presence of communication-rights considerations; regulatory blind spots; inequal...
Regulatory attention typically overlooks vulnerable and marginalized populations (low-wage workers, women, rural communities), whose mobile communication practices and data are disproportionately exposed to harm.
Document-based qualitative analysis identifying patterns of inclusion/exclusion in regulatory texts and public debate; stakeholder commentary reviewed indicates limited consideration of these groups. (Sample count not provided.)
Confidence: medium | Direction: negative | Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... | Outcome: inclusion of vulnerable groups in regulatory attention; exposure to harm
Indonesia’s governance of mobile-AI rests largely on soft‑law, aspirational instruments (guidelines, non‑binding ethics codes), which limits enforceability and accountability.
Qualitative discourse- and document-based analysis of key policy documents, national ethics guidelines, industry statements, and public stakeholder commentary related to mobile-AI in Indonesia. (The paper identifies dominant use of non‑binding instruments; exact number of documents reviewed is not specified.)
Confidence: medium | Direction: negative | Source: Promising Protection, Producing Exposure: AI Ethics and Mobi... | Outcome: policy enforceability and accountability
Fidelity-related biases risk concentrating harms among underrepresented populations, potentially increasing healthcare costs and welfare losses; economic evaluation and auditing for distributional impacts should be integrated into procurement and reimbursement decisions.
Interdisciplinary evidence synthesized from ML fairness studies, clinical validation reports, and health economics literature included in the review; the review recommends integrating distributional analysis in HTA based on documented risks of differential model performance.
Confidence: medium | Direction: negative | Source: On the use of synthetic data for healthcare AI in Africa: Te... | Outcome: differential error rates across subpopulations, distributional welfare impacts, ...
Weak or absent regulation increases uncertainty and may deter investment or lead to adoption of low-quality synthetic products with negative economic and clinical externalities.
Policy literature and implementation case studies summarized in the review that link regulatory gaps to investment risk and potential for low-quality product adoption; evidence is mostly inferential and descriptive.
Confidence: medium | Direction: negative | Source: On the use of synthetic data for healthcare AI in Africa: Te... | Outcome: investment levels, prevalence of low-quality products, clinical/economic externa...
Without improvements in fidelity and domain adaptation, synthetic data risks introducing bias and limiting clinical and economic benefits.
Integrated assessment from machine-learning evaluations, clinical validation studies, and implementation analyses within the review which link fidelity and domain mismatch to biased model outputs and reduced clinical utility; economic implications are inferred from cost-effectiveness and procurement literature cited in the review.
Confidence: medium | Direction: negative | Source: On the use of synthetic data for healthcare AI in Africa: Te... | Outcome: distributional bias, clinical utility (e.g., diagnostic accuracy, decision impac...
There is evidence of problematic patterns in automated decision appeals and workflow interactions when AI is integrated into clinical processes.
Case studies, deployment reports, and observational analyses cited in the synthesis that document increased appeals, workflow friction, or unexpected interactions caused by automation.
Confidence: medium | Direction: negative | Source: Framework for Government Policy on Agentic and Generative AI... | Outcome: workflow burden / frequency of appeals / process failures
Failing to retrain health workers for AI will produce structural labor-market mismatches, slow adoption, and reduce realized economic benefits.
Labor-market analysis and workforce readiness findings from the narrative synthesis and Delphi inputs; argument is inferential based on observed skill gaps and adoption barriers in the reviewed literature.
Confidence: medium | Direction: negative | Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... | Outcome: adoption rates of AI tools, productivity gains, workforce skill alignment metric...
Indonesia risks technological dependency on foreign vendors if domestic capability, data governance, and procurement are not strengthened.
Market and policy assessment from the review, including procurement analyses and discussion in supplementary national reports and Delphi studies; based on observed market structures and procurement practices identified in the literature.
Confidence: medium | Direction: negative | Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... | Outcome: degree of market reliance on foreign AI vendors / domestic market share
Approximately 58.7% of the relevant Indonesian health workforce lacks the AI competence or literacy needed for safe, scalable adoption.
Workforce readiness estimate derived from reviewed workforce assessments, Delphi consensus studies, and national reports included in the narrative synthesis; the summary does not specify sample frames or exact survey instruments that produced the 58.7% figure.
Confidence: medium | Direction: negative | Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... | Outcome: percent of health workforce lacking AI competence/literacy
Indonesia’s AI healthcare maturity score is approximately 52/100, trailing regional peers (example comparators: Singapore ≈ 92, Malaysia ≈ 78).
Benchmarking performed in the review against regional maturity catalogues and international standards (EU AI Act, Singapore, Australia); maturity scoring method referenced in the paper but detailed scoring rubric and underlying metrics not fully reproduced in the summary.
Confidence: medium | Direction: negative | Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... | Outcome: composite AI-health system maturity score (0–100)
Widespread adoption of LLMs without adequate verification increases systemic cybersecurity risks with potential economic spillovers.
Synthesis of security incident case studies and risk analyses revealing vulnerabilities in generated code and potential downstream impacts.
Confidence: medium | Direction: negative | Source: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: frequency/severity of security breaches attributable to AI-generated code; downs...
Models lack deep contextual reasoning and may fail on tasks requiring long-term design thinking or deep domain knowledge.
Benchmark failures and user studies in the reviewed literature demonstrating degraded performance on complex architectural/design tasks and domain-specific reasoning problems.
Confidence: medium | Direction: negative | Source: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: task success on long-horizon design tasks, reasoning/chain-of-thought benchmark ...
Use of these tools can mask gaps in foundational computational skills among novices.
Pedagogical case studies and assessments indicating reliance on AI can produce superficial solutions and lower demonstrated understanding of core concepts.
Confidence: medium | Direction: negative | Source: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: measures of foundational skill (conceptual quiz scores, ability to solve novel/u...
Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
Policy analysis and normative recommendations based on identified externalities in the reviewed literature; no empirical policy evaluation in paper.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: existence of externalities and scope for public policy interventions
Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models.
Regulatory and economic literature synthesized in the narrative review; policy/legal case citations included but no new cost estimates provided.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: compliance costs, market entry barriers, diffusion rates
Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues.
Theoretical arguments using attention-economy literature and secondary studies; narrative reasoning without new empirical quantification.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: attention per item, willingness-to-pay, content monetization
Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration.
Market-structure and firm-strategy analysis drawn from secondary literature and conceptual arguments in the paper.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: market concentration, vertical integration, strategic advantage for data/infrast...
Information-quality externalities from misinformation and reduced trust impose social costs that are not internalized by producers, justifying policy interventions such as liability rules or provenance standards.
Theoretical externality reasoning and policy literature reviewed; no social-welfare empirical quantification included in the paper.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: social-welfare losses from misinformation and trust erosion
Economies of scale, data-driven advantages, and compute costs may concentrate market power in a few platforms or studios, raising entry barriers.
Market-structure reasoning and referenced industry analyses in the literature review; no empirical market-concentration metrics computed in the paper.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: market concentration (e.g., HHI), entry rates, and barriers to entry
Cross-border enforcement difficulties and divergent national rules produce legal fragmentation in regulation and judiciary responses to generative audiovisual AI.
Comparative review of international statutes and judicial approaches included in the paper; qualitative legal analysis rather than empirical cross-jurisdictional enforcement metrics.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: degree of legal fragmentation across jurisdictions (differences in statutes, enf...
Process-stage risks include concentration of capabilities among a few platforms/actors and deficits in control, governance and transparency (e.g., limited explainability and restricted model access).
Policy and market-structure literature reviewed; descriptive evidence of platform concentration cited qualitatively but no original market-share analysis or sample sizes.
Confidence: medium | Direction: negative | Source: Ethical and societal challenges to the adoption of generativ... | Outcome: market concentration of model capabilities and levels of governance/transparency
Key data challenges in African contexts are measurement error, censoring, selection bias (informal actors absent from official datasets), privacy/ethical concerns, and limited digital trace coverage in some regions.
Methodological critique synthesized from the literature reviewed in the paper.
Confidence: medium | Direction: negative | Source: Continental shift: operations and supply chain management re... | Outcome: threats to data quality and representativeness for empirical studies
Key constraints on realized gains include governance complexity, model reliability limits (errors, brittleness, distribution shifts), orchestration challenges integrating agents across systems, and ongoing need for human oversight for safety, fairness, and quality control.
Qualitative observations and limitations reported from the Alfred AI deployments and authors' analysis of operational experience; evidence comes from live deployments but is descriptive rather than quantitative.
Confidence: medium | Direction: negative | Source: Artificial Intelligence Agents in Knowledge Work: Transformi... | Outcome: presence and impact of governance complexity, model errors, orchestration diffic...
Data‑driven agritech platforms exhibit network effects and potential for market power, implying a policy need for data portability and interoperability to preserve competition.
Economic reasoning, policy reports, and case study examples summarized in the review; the claim is grounded in market analysis rather than large‑scale causal studies.
Confidence: medium | Direction: negative | Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION | Outcome: market concentration, barriers to entry, interoperability metrics
If left unregulated and untargeted, AI and digital agritech platforms risk concentrating surplus with technology providers and capital owners, potentially increasing rural inequality and weakening smallholder bargaining power.
Theoretical market‑structure analysis, case studies of platform markets, and policy analyses cited in the paper; empirical causal evidence on long‑run distributional effects is limited.
Confidence: medium | Direction: negative | Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION | Outcome: distribution of surplus/value capture, measures of rural inequality, smallholder...
Data ownership, lack of interoperability, privacy concerns, and concentration of digital agritech platforms create risks for competition and equitable value capture in agricultural value chains.
Policy reports, market analyses, and case studies discussed in the paper; the claim is supported by descriptive evidence and theoretical assessments rather than large causal estimates.
Confidence: medium | Direction: negative | Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION | Outcome: market concentration, distribution of surplus/value capture, competition indicat...
Accumulated latent defects from unchecked AI outputs create negative externalities across dependent systems, complicating pricing and insurance; liability and cyber insurance markets may need to adapt.
Policy and economics argumentation drawing on externality theory; no actuarial or insurance-market empirical analysis provided.
Confidence: medium | Direction: negative | Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... | Outcome: incidence and cost of third-party harms attributable to AI-originated defects, i...
Measured productivity gains from AI-assisted development may overstate welfare gains if verification costs, defect externalities, and long-run fragility are omitted from accounting.
Economic reasoning and accounting argument; no empirical accounting studies or welfare analyses presented.
Confidence: medium | Direction: negative | Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... | Outcome: net productivity/welfare (productivity gains minus verification and remediation ...
The harm from latent defects is diffuse and slow-moving, making it easy for decision-makers to underweight these risks in adoption choices.
Descriptive argument drawing on behavioral economics concepts (discounting, salience); no empirical decision-making data included.
Confidence: medium | Direction: negative | Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... | Outcome: time-discounted valuation of future incident costs by decision-makers; observed ...
Small, unverified changes accumulate over time into system-level fragility, hidden bugs, and security vulnerabilities (latent risk accumulation).
Causal reasoning and illustrative examples; no longitudinal empirical measurement of defect accumulation presented.
Confidence: medium | Direction: negative | Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... | Outcome: rate of latent defects/vulnerabilities per release over time; system fragility i...
AI-assisted code generation produces a throughput asymmetry: generation capacity rises much faster than human or automated verification capacity.
Synthesis of conceptual arguments and illustrative scenarios; no quantitative empirical evidence or sample-based analysis included in the paper.
Confidence: medium | Direction: negative | Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... | Outcome: relative growth rates of generation capacity vs verification capacity (generatio...
Verification (human review, testing, security analysis) does not scale at the same rate as AI-assisted generation and becomes the bottleneck.
Mechanism reasoning and qualitative argumentation; illustrative examples showing mismatch between generation and verification capacity. No empirical scaling measurements provided.
Confidence: medium | Direction: negative | Source: Overton Framework v1.0: Cognitive Interlocks for Integrity i... | Outcome: verification throughput (e.g., reviews/tests/sec, reviewer-hours per generated a...
Overreliance on generative AI risks eroding worker critical thinking and loss of tacit expertise.
Conceptual arguments supported by observational reports and theoretical concerns in the literature synthesis; limited empirical evidence cited.
Confidence: medium | Direction: negative | Source: The Use of ChatGPT in Business Productivity and Workflow Opt... | Outcome: measures of worker critical thinking, retention/loss of tacit skills, task profi...
Security vulnerabilities and IP leakage create negative externalities; absent internalization, social costs (breaches, legal disputes) may rise.
Security analyses, documented incidents, and economic externality reasoning synthesized from the literature; empirical quantification of social cost is limited.
Confidence: medium | Direction: negative | Source: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: social costs from security breaches and IP disputes (incidence and severity)
Generated code may incidentally reproduce copyrighted or licensed snippets from training data.
Analyses detecting verbatim or near-verbatim reproductions of licensed/copyrighted code in model outputs in selected tests and audits; evidence heterogeneous and depends on prompts and model/data.
Confidence: medium | Direction: negative | Source: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: frequency of reproduced copyrighted/licensed code in outputs
Outputs often lack deep, project-level contextual reasoning (e.g., design tradeoffs, architecture constraints).
Qualitative failure-mode analyses, user studies, and benchmark tasks showing limitations in system-level reasoning and context-aware design decisions; evidence from short-horizon labs and case studies.
Confidence: medium | Direction: negative | Source: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: ability to produce context-appropriate architectural/design decisions
There is a risk of shallow learning if learners over-rely on AI outputs without understanding fundamentals.
Educational studies and observational analyses indicating reduced engagement with underlying concepts for some learners using AI assistance, plus qualitative reports from instructors; studies often short-term.
Confidence: medium | Direction: negative | Source: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: depth of conceptual understanding and learning outcomes
There is a significant political-economy risk that dominant states or firms (an "AI superpower" veto) could block or undermine coordination on token taxes.
Political-economy discussion identifying veto risks and possible deterrent mechanisms; conceptual argumentation without empirical probability estimates.
Confidence: medium | Direction: negative | Source: Token Taxes: mitigating AGI's economic risks | Outcome: risk of coordinated enforcement failure due to concentrated actor veto
FLOP taxes face measurement, enforceability, and leakage challenges, and they tax computational inputs rather than the point where value is realized.
Comparative critique presented in the paper; conceptual analysis without empirical measurement of FLOP-tax implementations.
Confidence: medium | Direction: negative | Source: Token Taxes: mitigating AGI's economic risks | Outcome: measurement difficulty, enforceability, leakage, and alignment of tax base with ...
Conversely, lack of standards or failed validation can create regulatory setbacks, reputational risk, and stranded R&D spending.
Case reports and regulatory analysis in the narrative review describing negative outcomes from failed validation or non-aligned AI tools (qualitative evidence).
Confidence: medium | Direction: negative | Source: Artificial Intelligence in Drug Discovery and Development: R... | Outcome: incidence of regulatory setbacks, reputational damage, amount of stranded/wasted...
Productivity gains from deploying agentic AI may be overstated if alignment costs, monitoring overhead, and coordination inefficiencies are ignored.
Conceptual economic accounting argument; recommends new accounting categories and empirical studies to quantify these factors.
Confidence: medium | Direction: negative | Source: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... | Outcome: net productivity gains after accounting for alignment/monitoring costs
Agentic systems generate tail risks and endogenous systemic correlations (multiple systems converging on similar failure modes), creating new insurability challenges.
Theoretical risk analysis and analogy to systemic risk literature; proposed implications for insurance markets but no empirical testing.
Confidence: medium | Direction: negative | Source: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... | Outcome: frequency/severity of tail events and systemic correlated failures among agentic...
Coordination and control mechanisms (hierarchies, protocols, monitoring) face scalability and specification problems when agents generate unforeseen actions.
Theoretical analysis and examples from multi-agent/organizational theory; no empirical measurement included.
Confidence: medium | Direction: negative | Source: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... | Outcome: effectiveness/scalability of coordination and control mechanisms