The Commonplace

Evidence (4333 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Active filter: Governance
Homogenized outputs increase organizational susceptibility to groupthink and correlated errors across teams using different models.
Argument based on observed inter-model convergence (high similarity across models) implying correlated outputs and thus correlated mistakes across teams; no randomized organizational field experiment is reported, so this is an inferred risk from the empirical convergence data.
medium | negative | The Artificial Hivemind: Rethinking Work Design and Leadersh... | risk of correlated errors / susceptibility to groupthink (conceptual risk inferr...
Homogenization of LLM outputs erodes creative diversity in AI-assisted work and reduces the variety of solutions produced.
Inference drawn from measured decreases in response diversity (entropy/distinct-n) and the observed inter-model convergence across real-world queries; argument linking lower measured diversity to fewer distinct solution proposals in AI-augmented workflows.
medium | negative | The Artificial Hivemind: Rethinking Work Design and Leadersh... | creative diversity / number of distinct solution variants produced
Current reward models and automated evaluation metrics are biased toward consensus/high-probability responses, preferring consensus-style outputs even when stylistically diverse alternatives are judged equally high-quality by humans.
Reported human preference assessments and comparisons between human judgments and automated/reward-model scores showing cases where reward models favor higher-probability/consensus outputs despite no human-quality advantage; analyses described comparing reward-model scores to human judgments on stylistically diverse outputs.
medium | negative | The Artificial Hivemind: Rethinking Work Design and Leadersh... | alignment between reward-model/automated evaluation scores and human quality jud...
Uneven inclusion in digital/AI deployments risks exacerbating digital divides and creating distributional harms.
Descriptive and case-based studies report differential access and uptake among demographic groups; limited causal quantification and varying measurement approaches across studies.
medium | negative | Digital Transformation and AI Adoption in Government: Evalua... | service coverage across demographic groups, measures of digital divide (access, ...
Limited auditability and explainability of AI systems increase trust and legitimacy risks.
Technical governance literature and case reports show challenges in model explainability and external audit; evidence is technical and illustrative rather than based on large-sample causal studies.
medium | negative | Digital Transformation and AI Adoption in Government: Evalua... | auditability metrics, transparency indicators, public trust measures
Inadequate regulatory frameworks raise privacy, accountability, and fairness concerns for AI in government.
Governance reviews and risk assessments documented in the literature highlight regulatory gaps and associated incidents/risks; empirical incident counts are not comprehensively tabulated in the review.
medium | negative | Digital Transformation and AI Adoption in Government: Evalua... | privacy breaches, accountability/audit findings, measures of fairness/bias incid...
Procurement, budgeting rules, and siloed incentives discourage cross-cutting transformation and modular iterative deployments.
Policy and institutional analyses in the reviewed literature point to rigid procurement cycles, capital budgeting practices, and siloed funding as obstacles; examples and case narratives are provided but systematic quantification is limited.
medium | negative | Digital Transformation and AI Adoption in Government: Evalua... | frequency of modular/iterative procurements, number of cross-cutting projects fu...
Organizational resistance and fragmented coordination block integrated rollouts of cross-cutting digital reforms.
Qualitative case studies and governance analyses repeatedly identify intra-governmental silos, conflicting incentives, and change-resistance as implementation barriers; evidence is primarily descriptive.
medium | negative | Digital Transformation and AI Adoption in Government: Evalua... | degree of cross-agency integration, completion rates of integrated projects, imp...
Skills shortages (technical, managerial, data literacy) impede adoption and maintenance of digital and AI systems.
Multiple surveys, policy briefs, and qualitative studies cited in the review report workforce capacity gaps; these are often based on targeted assessments or organizational audits rather than representative sampling.
medium | negative | Digital Transformation and AI Adoption in Government: Evalua... | adoption rates, system maintenance capacity, time-to-value for deployments
Infrastructure deficits (connectivity, legacy systems) limit scale and reliability of digital/AI initiatives.
Recurring barrier documented across governance analyses and case studies; evidence includes reports of downtime, integration failures, and limited geographic reach; no unified cross-study sample provided.
medium | negative | Digital Transformation and AI Adoption in Government: Evalua... | system reliability/uptime, scalability, geographic/service coverage
Unresolved liability and regulatory uncertainty increase malpractice risk and insurance costs, leading insurers and providers to favor conservative adoption and continued human-in-the-loop safeguards.
Regulatory/legal analysis and stakeholder behavior models discussed in the review; observed cautious deployment patterns in practice noted in the literature.
medium | negative | Will AI Replace Physicians in the Near Future? AI Adoption B... | malpractice risk; insurance premiums; adoption conservatism; presence of human-i...
Regulatory pathways and approval standards are evolving but are not yet aligned with deployment of high-autonomy clinical systems.
Review of recent policy analyses and regulatory documents showing ongoing updates and gaps between current standards and requirements for high-autonomy AI deployment.
medium | negative | Will AI Replace Physicians in the Near Future? AI Adoption B... | alignment between regulatory frameworks and high-autonomy clinical deployment re...
Robust, locally appropriate data governance (privacy, interoperability, standards) is a public good that underpins trust and data-driven markets; weak governance raises risks of exclusion and foreign dependency.
Governance and policy literature synthesized in the review; conceptual arguments supported by examples but limited empirical evaluation in LMIC SME contexts.
medium | negative | Artificial Intelligence Adoption for Sustainable Development... | data governance robustness; SME inclusion in data-driven markets; foreign depend...
Platform effects and supplier ecosystems associated with AI may create winner-takes-most market dynamics, so policy should monitor market concentration and enable competitive access to core AI services.
Literature on platforms and market structure combined with case examples; review notes potential for concentration but lacks broad causal studies quantifying effects in LMIC SME markets.
medium | negative | Artificial Intelligence Adoption for Sustainable Development... | market concentration metrics; access to core AI services by SMEs
Fragmented or weak data governance (privacy rules, standards, interoperability, and trust) reduces SMEs’ ability to participate in data-driven markets and adopt AI.
Policy analyses and governance-focused studies in the review highlighting data governance weaknesses in LMICs and associated risks for SMEs; examples discussed rather than quantified nationally.
medium | negative | Artificial Intelligence Adoption for Sustainable Development... | data governance quality; SME participation in data markets; trust/interoperabili...
Sanctions and supply-chain restrictions affect access to hardware and software, altering adoption paths and increasing costs; domestic substitution or international cooperation will influence future trajectories.
Institutional analysis documenting sanctions/import restrictions and their implications for hardware/software access; qualitative assessment of substitution and cooperation options.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | availability and cost of hardware/software inputs for AI and resulting adoption ...
The barriers to AI adoption in Russia’s extractive industries interact systemically (e.g., lack of data reduces demand for talent; weak infrastructure deters investment), so piecemeal measures will have limited effect.
Analytical synthesis identifying co-moving constraints across cross-country trends and qualitative firm-level evidence showing interacting bottlenecks.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | overall effectiveness of isolated vs. coordinated interventions on AI diffusion ...
Institutional failures—weak standards/interoperability, limited public–private coordination, regulatory uncertainty, and sanctions/import restrictions—exacerbate diffusion problems for AI in extractive sectors.
Institutional review of standards, procurement and public–private coordination mechanisms; documentation of regulatory uncertainty and sanctions/import restrictions affecting hardware/software access.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | standards/interoperability quality, level of public–private coordination, regula...
Infrastructure (insufficient sensorization, limited edge/cloud connectivity, inadequate computing hardware, and immature localized software stacks) is underdeveloped in Russia relative to peers and hinders deployment.
ICT infrastructure indicators, comparative metrics on sensorization/connectivity/computing availability, and project case evidence from extractive firms.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | sensor density, connectivity quality (edge/cloud readiness), availability of com...
Human capital constraints (shortages of AI talent in industry-specific roles, limited retraining of engineering staff, and brain drain) reduce the sector's capacity to absorb and deploy AI.
Workforce and education statistics, patent/activity counts, and expert commentary; qualitative case evidence showing limited retraining and talent shortages in industry-specific AI roles.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | industry-specific AI talent supply, retraining rates for engineering staff, meas...
Absolute and relative AI investment volumes in the Russian extractive sector are lower than in the US, China and EU; private risk capital is limited and public support insufficiently targeted to scale-up projects.
Investment datasets and national/industry statistics comparing public and private AI investment volumes (absolute and relative to output) for extractive sectors across jurisdictions (2020–2025).
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | AI investment volumes (absolute and per unit of extractive output); availability...
Data access is a primary bottleneck: datasets are fragmented, often proprietary or closed, ownership rules are unclear, and mechanisms for safe data sharing are weak, hindering model training and cross-firm applications.
Review of data governance frameworks across jurisdictions and firm-level case evidence documenting closed/proprietary datasets and weak sharing mechanisms.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | availability and usability of industrial data for AI model training and cross-fi...
The gap is driven not only by smaller investment flows but also by institutional constraints—limited data access, weak data governance, human capital shortages, and inadequate digital infrastructure—that together suppress diffusion and scaling of AI applications.
Institutional analysis (review of data governance frameworks, regulatory regimes, standards, market structure) plus qualitative firm-level case studies and expert commentary illustrating how these factors impede adoption and scaling.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | diffusion and scaling of AI applications in extractive industries
Russia’s adoption of AI in extractive industries is both slower (lower growth rate) and shallower (lower depth of digitalization) than peer jurisdictions in 2020–2025.
Time-series comparison of digitalization/digital-maturity proxies and AI investment volumes across countries for 2020–2025; synthesis of trend differences from public datasets and sectoral indices.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | rate of change in digitalization indicators and depth of digitalization (digit m...
Between 2020–2025 Russia trails the United States, China and the EU on both digitalization indicators and AI investment volumes in the mining and oil & gas sectors.
Comparative multi-country trend analysis (2020–2025) using publicly available investment and digitalization indicators: national/industry statistics, investment datasets, and sectoral digitalization indices comparing Russia, the US, China, and the EU.
medium | negative | ADOPTION OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN EXTRACTIV... | digitalization levels and AI investment volumes per unit of extractive output (m...
Loss of control over research data impedes local capture of value (knowledge, IP, downstream services) and can create externalities when data are repurposed or commercialized without equitable benefit sharing.
Conceptual argument grounded in case observations about data flows and provider practices; no quantitative measures of value capture provided.
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | local value capture; intellectual property and benefit sharing
Dominant AI/cloud providers become de facto gatekeepers of data processing and storage; researchers and institutions, particularly in lower‑capacity jurisdictions, have limited bargaining power to enforce data‑sovereignty or transparency terms.
Mapping of third‑party dependencies and interview/observational evidence of institutional procurement constraints in the Chile case; normative discussion of market power implications.
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | bargaining power; market gatekeeping
Algorithmic opacity and cross‑border regulatory fragmentation raise monitoring, compliance, and contractual costs for collaborative research, effectively increasing the transaction costs of data‑intensive science.
Analytical inference from qualitative findings (opacity, legal fragmentation) and normative economic reasoning presented in the implications section; no quantitative transaction‑cost measurement reported.
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | transaction costs; monitoring and compliance costs
Inequalities in infrastructure (local compute, storage, institutional procurement power) amplify these problems: researchers in weaker jurisdictions face higher risks and fewer mitigation options.
Case study observations about local infrastructure capacity, procurement practices, and institutional constraints in Chile; qualitative reports of limited mitigation choices.
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | risk exposure and available mitigation options by jurisdiction/institutional cap...
Rather than shifting liability away from researchers, AI systems increase researchers' ethical responsibilities: researchers must assess third‑party tools, negotiate data flows, and manage risks despite having limited contractual leverage.
Qualitative interviews and institutional observations reporting researchers' roles in assessing tools and managing data flows; normative analysis of accountability responsibilities in the case study.
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | researcher responsibility/liability burden
Algorithmic opacity (hidden models, undocumented data flows, proprietary cloud stacks) reduces researchers' ability to control or even know how participant data are used, transferred, or monetized.
Interview data and mapping of third‑party dependencies showing opaque provider practices and limited transparency about model/data flows in the Chile case study.
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | researcher control over data use/transfer/monetization
Everyday AI services used in research introduce new, diffuse points of data capture and processing that complicate informed consent and privacy management.
Observations and documented mappings of tool use and data flows (e.g., transcription services, cloud platforms, meeting assistants) reported in the case study; supported by qualitative interviews with researchers/administrators.
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | informed consent processes; privacy management
AI tools embedded in everyday research infrastructures intensify — rather than reduce — ethical accountability burdens: they constrain researcher autonomy and undermine data sovereignty, especially in cross‑national settings where legal protections are fragmented or weaker.
Qualitative case study centered on environmental science research in Chile that uses GDPR as a normative framework; methods reported include interviews, observation, and mapping of data flows and third‑party dependencies (sample sizes not reported).
medium | negative | Emerging ethical duties in AI-mediated research: A case of d... | ethical accountability burden; researcher autonomy; data sovereignty
Insufficient regulation increases risks of negative externalities (privacy harms, biased hiring/management) that can reduce labor supply attachment or lower human capital investments.
Theoretical reasoning and synthesis of documented case studies and reports referenced in the commentary; not supported by new causal empirical analysis in the paper.
medium | negative | AI governance under the second Trump administration: implica... | privacy harms; biased hiring/management; labor supply attachment; human capital ...
Absent strong worker voice or mandated impact assessments, AI-driven surveillance, algorithmic management and task reallocation are more likely, increasing risks of deskilling, displacement, and discriminatory outcomes.
Policy synthesis identifying plausible channels from AI system use to worker harms; supported by case-study reports in the symposium but no systematic empirical quantification in this commentary.
medium | negative | AI governance under the second Trump administration: implica... | incidence of surveillance and algorithmic management; worker outcomes (deskillin...
Weakening of organized labor and stalled worker-protection legislation raises the probability that AI adoption will increase employer bargaining power, potentially depressing wages and worsening job quality for affected occupations.
Analytic inference from labor economics theory and policy review; commentary does not present causal microdata linking AI adoption to wage or job-quality outcomes.
medium | negative | AI governance under the second Trump administration: implica... | employer bargaining power; wages; job quality in affected occupations
Export controls may constrain access to advanced models and hardware, affecting productivity gains unevenly across firms and sectors.
Policy analysis of current export control instruments and their potential economic effects; no firm- or sector-level quantitative analysis presented.
medium | negative | AI governance under the second Trump administration: implica... | access to advanced AI models/hardware; sectoral/productivity gains
A conservative Supreme Court majority increases the risk of rulings that could further constrain organized labor and weaken labor’s power to negotiate AI-related workplace rules.
Legal analysis connecting Supreme Court composition and recent jurisprudence to possible effects on labor law and collective bargaining; predictive inference rather than empirical testing.
medium | negative | AI governance under the second Trump administration: implica... | legal constraints on organized labor’s bargaining power (court rulings affecting...
The second Trump administration is dismantling many Biden-era worker-protection initiatives (notably rescinding or undercutting the Biden Executive Order intended to hold employers accountable for AI impacts).
Policy/legal analysis referencing recent executive actions and reported rollbacks of Biden-era frameworks; synthesis of documents and news/administrative actions reviewed in the commentary; no original empirical sample.
medium | negative | AI governance under the second Trump administration: implica... | existence and scope of executive-order-based worker-protection initiatives
The DoD acquisition workforce is shrinking (through retirements, buyouts, and reductions in force), reducing the institutional knowledge and discretionary capacity needed to carry out the memo's expectations responsibly.
Institutional trend evidence: assessment of publicly reported and internal staffing trends (reports of retirements, voluntary buyouts, reductions in force). No precise headcount, rate, or sample size provided in the analysis; described as a documented declining acquisition workforce.
medium | negative | FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... | size and capacity of the acquisition workforce; loss of institutional expertise
Mandated 'any lawful use' contract language shifts risk-management responsibilities toward the government, reducing contractors' incentives to constrain misuse and increasing government residual legal/operational exposure.
Primary source analysis of required contract language in the memo and contracting directives, combined with conceptual principal–agent and moral-hazard assessment (risk/scenario modeling). No empirical measurement of incentive changes provided.
medium | negative | FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... | allocation of legal/operational risk between contractors and government; inferre...
The memo departs from the Department's prior lifecycle-assurance framework and substitutes different standards while elevating vague criteria (e.g., 'model objectivity') without operational definitions or evaluation methods.
Primary source comparison: close reading of the January 2026 memo versus prior DoD lifecycle-assurance documents; identification of new/changed terminology and lack of accompanying operational definitions or test methods in the policy text.
medium | negative | FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... | clarity and operationalization of procurement standards (presence/absence of def...
By centralizing waiver decisions in a Barrier Removal Board, the memo converts baseline governance controls into exception-driven permissions (i.e., governance becomes something to be suspended rather than enforced).
Qualitative institutional analysis and primary-source reading of the memo establishing a centralized waiver process; mapping of how waiver mechanisms interact with existing assurance processes (ATO, T&E, contracting). No quantitative measurement of waiver frequency provided.
medium | negative | FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... | status of governance controls (baseline enforcement vs. exception/waiver-driven)
The memo explicitly frames governance and procurement speed as a zero-sum tradeoff and labels long-standing oversight mechanisms (Authorities to Operate, test & evaluation, contracting reviews) as 'blockers' eligible for waiver.
Primary source analysis: textual interpretation of the memo and accompanying contracting directives that characterize oversight mechanisms as impediments and make them eligible for waiver. Evidence is documentary (policy text); no quantitative sample.
medium | negative | FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... | framing of governance vs. speed in policy language; designation of specific over...
Regulatory fragmentation increases compliance costs and stifles cross-border scale economies; international coordination and mutual recognition of standards can lower trade costs.
Comparative governance analysis and economic reasoning about cross-border trade and compliance; no cross-country causal estimates provided in the report.
medium | negative | AI Governance and Data Privacy: Comparative Analysis of U.S.... | compliance costs, cross-border scale economies, trade costs
Large incumbents with data/network advantages may entrench market power.
Policy and literature review noting data/network effects, observed tendencies in tech markets; sectoral examples discussed in the report.
medium | negative | AI Governance and Data Privacy: Comparative Analysis of U.S.... | market power metrics, entry barriers, data advantage effects
Without targeted policy, AI can amplify winner-take-all dynamics (market concentration, superstar firms) and spatial inequalities (urban vs. rural).
Theoretical economic arguments and review of literature on data/network effects and concentration; comparative policy analysis that raises distributional concerns.
medium | negative | AI Governance and Data Privacy: Comparative Analysis of U.S.... | market concentration, firm market shares, spatial inequality indicators
Without international coordination, providers may relocate compute or obscure compute locations to avoid stricter regimes; harmonized rules reduce these distortions.
Regulatory mapping and economic reasoning about geographic investment, regulatory arbitrage, and compute-location disclosure incentives.
medium | negative | The Global Landscape of Environmental AI Regulation: From th... | likelihood of compute relocation or obfuscation (probability or incidence) and e...
Compliance and reporting requirements will impose additional costs on firms, with small providers likely disproportionately affected unless rules are proportionate.
Policy analysis of compliance and transaction costs (qualitative assessment of administrative burden and scale effects).
medium | negative | The Global Landscape of Environmental AI Regulation: From th... | incremental compliance/reporting costs and distributional impact across firm siz...
The facility-level focus and training-phase emphasis of current governance limit regulators' ability to monitor and mitigate the full environmental externalities of modern AI systems.
Synthesis of empirical findings on model/inference impacts combined with regulatory mapping showing gaps between impact locus and regulatory reach.
medium | negative | The Global Landscape of Environmental AI Regulation: From th... | regulatory coverage gap (degree to which regulatory instruments capture model-le...