The Commonplace

Evidence (2469 claims)

Adoption (5539 claims)
Productivity (4793 claims)
Governance (4333 claims)
Human-AI Collaboration (3326 claims)
Labor Markets (2657 claims)
Innovation (2510 claims)
Org Design (2469 claims)
Skills & Training (2017 claims)
Inequality (1378 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Active filter: Org Design
MLOps and governance provisions shift costs from one-off implementation to ongoing maintenance, implying recurring costs that should be captured in economic evaluations.
Analytical/economic argument presented in the paper as an implication of including an MLOps layer (conceptual; no empirical cost accounting provided).
medium · negative · ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... · cost structure (recurring maintenance costs vs one-off implementation costs)
Adoption complementarities (AI tools + developer skill + organizational processes) favor larger incumbents and well‑funded firms, possibly increasing concentration in tech sectors.
Theoretical argument about complementarities and returns to scale; illustrative examples; lacks firm‑level empirical testing.
medium · negative · How AI Will Transform the Daily Life of a Techie within 5 Ye... · market concentration measures (market share, concentration ratios) and different...
In the near term, displacement risks concentrate on junior or highly routine roles; mobility and retraining will determine realized unemployment impacts.
Task-automatability mapping indicating that routine tasks are more automatable, combined with qualitative reasoning on labor mobility; no empirical unemployment projections.
medium · negative · How AI Will Transform the Daily Life of a Techie within 5 Ye... · employment outcomes for junior/highly routine roles (displacement rates, unemplo...
Adoption will be heterogeneous: larger firms and well‑resourced teams will capture more gains earlier, producing competitive advantages.
Theoretical argument about adoption complementarities (AI tools + developer skill + organizational processes) and illustrative examples; no cross‑firm empirical analysis.
medium · negative · How AI Will Transform the Daily Life of a Techie within 5 Ye... · heterogeneity in productivity gains and market advantage by firm size/resource l...
Differential adoption across firms (due to modular, scalable designs and data advantages) may create winner‑takes‑most effects and increase market concentration, benefiting early adopters with rich data/integration capabilities.
Market-structure claim supported by economic reasoning about scale and data advantages; no cross-firm empirical adoption study or market concentration time‑series is provided.
medium · negative · Next-Generation Financial Analytics Frameworks for AI-Enable... · market concentration metrics (e.g., HHI), firm market shares, adoption timing di...
Organizations will incur additional governance and procurement costs (diversity audits, recalibration of reward models, multi-model infrastructures) to mitigate homogenization, shifting some economic benefits of AI toward governance spending.
Cost implication argued from the need for auditing and multi-model procurement described in recommendations; not supported by quantified cost analyses in the paper.
medium · negative · The Artificial Hivemind: Rethinking Work Design and Leadersh... · governance and procurement costs associated with LLM deployment
Inter-model convergence undermines product differentiation across AI providers and could accelerate commoditization of base LLM outputs.
Market-structure inference built on empirical finding of high cross-model output similarity across 70+ models and theoretical discussion of vendor differentiation; no market-level price or adoption time-series analyzed in the paper.
medium · negative · The Artificial Hivemind: Rethinking Work Design and Leadersh... · vendor product differentiation / commoditization of base outputs
Homogenized AI outputs reduce the value of AI as a source of varied cognitive complements to human labor, potentially lowering productivity gains from human–AI collaboration in tasks requiring creativity and exploration.
Economic argument drawing on measured decreases in model output diversity and theoretical literature on complementarities between diverse AI outputs and human creativity; no direct measured productivity changes reported in field settings within the paper.
medium · negative · The Artificial Hivemind: Rethinking Work Design and Leadersh... · productivity gains from human–AI collaboration (theoretical implication inferred...
Reward-model and evaluation miscalibration can cause organizations to prefer models that maximize apparent evaluation scores at the expense of useful stylistic or cognitive diversity.
Comparative analyses between automated evaluation/reward-model rankings and human preference/diversity assessments reported in the paper; examples where high-scoring models produced more consensus-style outputs.
medium · negative · The Artificial Hivemind: Rethinking Work Design and Leadersh... · model selection bias driven by automated evaluation scores; reduction in diversi...
Homogenized outputs increase organizational susceptibility to groupthink and correlated errors across teams using different models.
Argument based on observed inter-model convergence (high similarity across models) implying correlated outputs and thus correlated mistakes across teams; no randomized organizational field experiment is reported, so this is an inferred risk from the empirical convergence data.
medium · negative · The Artificial Hivemind: Rethinking Work Design and Leadersh... · risk of correlated errors / susceptibility to groupthink (conceptual risk inferr...
Homogenization of LLM outputs erodes creative diversity in AI-assisted work and reduces the variety of solutions produced.
Inference drawn from measured decreases in response diversity (entropy/distinct-n) and the observed inter-model convergence across real-world queries; argument linking lower measured diversity to fewer distinct solution proposals in AI-augmented workflows.
medium · negative · The Artificial Hivemind: Rethinking Work Design and Leadersh... · creative diversity / number of distinct solution variants produced
Current reward models and automated evaluation metrics are biased toward high-probability, consensus-style responses, preferring them even when stylistically diverse alternatives are judged equally high-quality by humans.
Reported human preference assessments and comparisons between human judgments and automated/reward-model scores showing cases where reward models favor higher-probability/consensus outputs despite no human-quality advantage; analyses described comparing reward-model scores to human judgments on stylistically diverse outputs.
medium · negative · The Artificial Hivemind: Rethinking Work Design and Leadersh... · alignment between reward-model/automated evaluation scores and human quality jud...
Introducing ‘agent capital’ (AI deployed as capital that lowers coordination frictions) compresses coordination costs inside firms (‘coordination compression’).
Definition and central assumption of the paper's formal task-based model; analytical setup assumes agent capital parametrically reduces coordination frictions.
medium · negative · AI as Coordination-Compressing Capital: Task Reallocation, O... · coordination costs (firm-internal coordination friction parameter)
Uneven inclusion in digital/AI deployments risks exacerbating digital divides and creating distributional harms.
Descriptive and case-based studies report differential access and uptake among demographic groups; limited causal quantification and varying measurement approaches across studies.
medium · negative · Digital Transformation and AI Adoption in Government: Evalua... · service coverage across demographic groups, measures of digital divide (access, ...
Limited auditability and explainability of AI systems increase trust and legitimacy risks.
Technical governance literature and case reports show challenges in model explainability and external audit; evidence is technical and illustrative rather than based on large-sample causal studies.
medium · negative · Digital Transformation and AI Adoption in Government: Evalua... · auditability metrics, transparency indicators, public trust measures
Inadequate regulatory frameworks raise privacy, accountability, and fairness concerns for AI in government.
Governance reviews and risk assessments documented in the literature highlight regulatory gaps and associated incidents/risks; empirical incident counts are not comprehensively tabulated in the review.
medium · negative · Digital Transformation and AI Adoption in Government: Evalua... · privacy breaches, accountability/audit findings, measures of fairness/bias incid...
Procurement, budgeting rules, and siloed incentives discourage cross-cutting transformation and modular iterative deployments.
Policy and institutional analyses in the reviewed literature point to rigid procurement cycles, capital budgeting practices, and siloed funding as obstacles; examples and case narratives are provided but systematic quantification is limited.
medium · negative · Digital Transformation and AI Adoption in Government: Evalua... · frequency of modular/iterative procurements, number of cross-cutting projects fu...
Organizational resistance and fragmented coordination block integrated rollouts of cross-cutting digital reforms.
Qualitative case studies and governance analyses repeatedly identify intra-governmental silos, conflicting incentives, and change-resistance as implementation barriers; evidence is primarily descriptive.
medium · negative · Digital Transformation and AI Adoption in Government: Evalua... · degree of cross-agency integration, completion rates of integrated projects, imp...
Skills shortages (technical, managerial, data literacy) impede adoption and maintenance of digital and AI systems.
Multiple surveys, policy briefs and qualitative studies cited in the review report workforce capacity gaps; often based on targeted assessments or organizational audits rather than representative sampling.
medium · negative · Digital Transformation and AI Adoption in Government: Evalua... · adoption rates, system maintenance capacity, time-to-value for deployments
Infrastructure deficits (connectivity, legacy systems) limit scale and reliability of digital/AI initiatives.
Recurring barrier documented across governance analyses and case studies; evidence includes reports of downtime, integration failures, and limited geographic reach; no unified cross-study sample provided.
medium · negative · Digital Transformation and AI Adoption in Government: Evalua... · system reliability/uptime, scalability, geographic/service coverage
Scalability and rapid model improvements provided by cloud vendors are harder to capture on-premise.
Comparative discussion in TOE analysis about vendor-managed continuous model improvements and cloud scalability versus on-prem constraints; not backed by longitudinal empirical comparison in the summary.
medium · negative · An Empirical Study on the Feasibility Analysis of On-Premise... · ability to capture rapid model improvements and scalability
Loss of control over research data impedes local capture of value (knowledge, IP, downstream services) and can create externalities when data are repurposed or commercialized without equitable benefit sharing.
Conceptual argument grounded in case observations about data flows and provider practices; no quantitative measures of value capture provided.
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · local value capture; intellectual property and benefit sharing
Dominant AI/cloud providers become de facto gatekeepers of data processing and storage; researchers and institutions, particularly in lower‑capacity jurisdictions, have limited bargaining power to enforce data‑sovereignty or transparency terms.
Mapping of third‑party dependencies and interview/observational evidence of institutional procurement constraints in the Chile case; normative discussion of market power implications.
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · bargaining power; market gatekeeping
Algorithmic opacity and cross‑border regulatory fragmentation raise monitoring, compliance, and contractual costs for collaborative research, effectively increasing the transaction costs of data‑intensive science.
Analytical inference from qualitative findings (opacity, legal fragmentation) and normative economic reasoning presented in the implications section; no quantitative transaction‑cost measurement reported.
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · transaction costs; monitoring and compliance costs
Inequalities in infrastructure (local compute, storage, institutional procurement power) amplify these problems: researchers in weaker jurisdictions face higher risks and fewer mitigation options.
Case study observations about local infrastructure capacity, procurement practices, and institutional constraints in Chile; qualitative reports of limited mitigation choices.
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · risk exposure and available mitigation options by jurisdiction/institutional cap...
Rather than shifting liability away from researchers, AI systems increase researchers' ethical responsibilities: researchers must assess third‑party tools, negotiate data flows, and manage risks despite having limited contractual leverage.
Qualitative interviews and institutional observations reporting researchers' roles in assessing tools and managing data flows; normative analysis of accountability responsibilities in the case study.
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · researcher responsibility/liability burden
Algorithmic opacity (hidden models, undocumented data flows, proprietary cloud stacks) reduces researchers' ability to control or even know how participant data are used, transferred, or monetized.
Interview data and mapping of third‑party dependencies showing opaque provider practices and limited transparency about model/data flows in the Chile case study.
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · researcher control over data use/transfer/monetization
Everyday AI services used in research introduce new, diffuse points of data capture and processing that complicate informed consent and privacy management.
Observations and documented mappings of tool use and data flows (e.g., transcription services, cloud platforms, meeting assistants) reported in the case study; supported by qualitative interviews with researchers/administrators.
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · informed consent processes; privacy management
AI tools embedded in everyday research infrastructures intensify — rather than reduce — ethical accountability burdens: they constrain researcher autonomy and undermine data sovereignty, especially in cross‑national settings where legal protections are fragmented or weaker.
Qualitative case study centered on environmental science research in Chile that uses GDPR as a normative framework; methods reported include interviews, observation, and mapping of data flows and third‑party dependencies (sample sizes not reported).
medium · negative · Emerging ethical duties in AI-mediated research: A case of d... · ethical accountability burden; researcher autonomy; data sovereignty
Insufficient regulation increases risks of negative externalities (privacy harms, biased hiring/management) that can reduce labor supply attachment or lower human capital investments.
Theoretical reasoning and synthesis of documented case studies and reports referenced in the commentary; not supported by new causal empirical analysis in the paper.
medium · negative · AI governance under the second Trump administration: implica... · privacy harms; biased hiring/management; labor supply attachment; human capital ...
Absent strong worker voice or mandated impact assessments, AI-driven surveillance, algorithmic management and task reallocation are more likely, increasing risks of deskilling, displacement, and discriminatory outcomes.
Policy synthesis identifying plausible channels from AI system use to worker harms; supported by case-study reports in the symposium but no systematic empirical quantification in this commentary.
medium · negative · AI governance under the second Trump administration: implica... · incidence of surveillance and algorithmic management; worker outcomes (deskillin...
Weakening of organized labor and stalled worker-protection legislation raises the probability that AI adoption will increase employer bargaining power, potentially depressing wages and worsening job quality for affected occupations.
Analytic inference from labor economics theory and policy review; commentary does not present causal microdata linking AI adoption to wage or job-quality outcomes.
medium · negative · AI governance under the second Trump administration: implica... · employer bargaining power; wages; job quality in affected occupations
Export controls may constrain access to advanced models and hardware, affecting productivity gains unevenly across firms and sectors.
Policy analysis of current export control instruments and their potential economic effects; no firm- or sector-level quantitative analysis presented.
medium · negative · AI governance under the second Trump administration: implica... · access to advanced AI models/hardware; sectoral/productivity gains
A conservative Supreme Court majority increases the risk of rulings that could further constrain organized labor and weaken labor’s power to negotiate AI-related workplace rules.
Legal analysis connecting Supreme Court composition and recent jurisprudence to possible effects on labor law and collective bargaining; predictive inference rather than empirical testing.
medium · negative · AI governance under the second Trump administration: implica... · legal constraints on organized labor’s bargaining power (court rulings affecting...
The second Trump administration is dismantling many Biden-era worker-protection initiatives (notably rescinding or undercutting the Biden Executive Order intended to hold employers accountable for AI impacts).
Policy/legal analysis referencing recent executive actions and reported rollbacks of Biden-era frameworks; synthesis of documents and news/administrative actions reviewed in the commentary; no original empirical sample.
medium · negative · AI governance under the second Trump administration: implica... · existence and scope of executive-order-based worker-protection initiatives
The DoD acquisition workforce is shrinking (through retirements, buyouts, reductions in force), reducing institutional knowledge and the discretionary capacity needed to exercise the memo's expectations responsibly.
Institutional trend evidence: assessment of publicly reported and internal staffing trends (reports of retirements, voluntary buyouts, reductions in force). No precise headcount, rate, or sample size provided in the analysis; described as a documented declining acquisition workforce.
medium · negative · FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... · size and capacity of the acquisition workforce; loss of institutional expertise
Mandated 'any lawful use' contract language shifts risk-management responsibilities toward the government, reducing contractors' incentives to constrain misuse and increasing government residual legal/operational exposure.
Primary source analysis of required contract language in the memo and contracting directives, combined with conceptual principal–agent and moral-hazard assessment (risk/scenario modeling). No empirical measurement of incentive changes provided.
medium · negative · FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... · allocation of legal/operational risk between contractors and government; inferre...
The memo departs from the Department's prior lifecycle-assurance framework and substitutes different standards while elevating vague criteria (e.g., 'model objectivity') without operational definitions or evaluation methods.
Primary source comparison: close reading of the January 2026 memo versus prior DoD lifecycle-assurance documents; identification of new/changed terminology and lack of accompanying operational definitions or test methods in the policy text.
medium · negative · FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... · clarity and operationalization of procurement standards (presence/absence of def...
By centralizing waiver decisions in a Barrier Removal Board, the memo converts baseline governance controls into exception-driven permissions (i.e., governance becomes something to be suspended rather than enforced).
Qualitative institutional analysis and primary-source reading of the memo establishing a centralized waiver process; mapping of how waiver mechanisms interact with existing assurance processes (ATO, T&E, contracting). No quantitative measurement of waiver frequency provided.
medium · negative · FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... · status of governance controls (baseline enforcement vs. exception/waiver-driven)
The memo explicitly frames governance and procurement speed as a zero-sum tradeoff and labels long-standing oversight mechanisms (Authorities to Operate, test & evaluation, contracting reviews) as 'blockers' eligible for waiver.
Primary source analysis: textual interpretation of the memo and accompanying contracting directives that characterize oversight mechanisms as impediments and make them eligible for waiver. Evidence is documentary (policy text); no quantitative sample.
medium · negative · FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... · framing of governance vs. speed in policy language; designation of specific over...
When AI is integrated into clinical processes, there is documented evidence of problematic patterns in automated-decision appeals and workflow interactions.
Case studies, deployment reports, and observational analyses cited in the synthesis that document increased appeals, workflow friction, or unexpected interactions caused by automation.
medium · negative · Framework for Government Policy on Agentic and Generative AI... · workflow burden / frequency of appeals / process failures
Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
Policy analysis and normative recommendations based on identified externalities in the reviewed literature; no empirical policy evaluation in paper.
medium · negative · Ethical and societal challenges to the adoption of generativ... · existence of externalities and scope for public policy interventions
Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models.
Regulatory and economic literature synthesized in the narrative review; policy/legal case citations included but no new cost estimates provided.
medium · negative · Ethical and societal challenges to the adoption of generativ... · compliance costs, market entry barriers, diffusion rates
Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues.
Theoretical arguments using attention-economy literature and secondary studies; narrative reasoning without new empirical quantification.
medium · negative · Ethical and societal challenges to the adoption of generativ... · attention per item, willingness-to-pay, content monetization
Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration.
Market-structure and firm-strategy analysis drawn from secondary literature and conceptual arguments in the paper.
medium · negative · Ethical and societal challenges to the adoption of generativ... · market concentration, vertical integration, strategic advantage for data/infrast...
Information-quality externalities from misinformation and reduced trust impose social costs that are not internalized by producers, justifying policy interventions such as liability rules or provenance standards.
Theoretical externality reasoning and policy literature reviewed; no social-welfare empirical quantification included in the paper.
medium · negative · Ethical and societal challenges to the adoption of generativ... · social-welfare losses from misinformation and trust erosion
Economies of scale, data-driven advantages, and compute costs may concentrate market power in a few platforms or studios, raising entry barriers.
Market-structure reasoning and referenced industry analyses in the literature review; no empirical market-concentration metrics computed in the paper.
medium · negative · Ethical and societal challenges to the adoption of generativ... · market concentration (e.g., HHI), entry rates, and barriers to entry
Cross-border enforcement difficulties and divergent national rules produce legal fragmentation in regulation and judiciary responses to generative audiovisual AI.
Comparative review of international statutes and judicial approaches included in the paper; qualitative legal analysis rather than empirical cross-jurisdictional enforcement metrics.
medium · negative · Ethical and societal challenges to the adoption of generativ... · degree of legal fragmentation across jurisdictions (differences in statutes, enf...
Process-stage risks include concentration of capabilities among a few platforms/actors and deficits in control, governance and transparency (e.g., limited explainability and restricted model access).
Policy and market-structure literature reviewed; descriptive evidence of platform concentration cited qualitatively but no original market-share analysis or sample sizes.
medium · negative · Ethical and societal challenges to the adoption of generativ... · market concentration of model capabilities and levels of governance/transparency
Key constraints on realized gains include governance complexity, model reliability limits (errors, brittleness, distribution shifts), orchestration challenges integrating agents across systems, and ongoing need for human oversight for safety, fairness, and quality control.
Qualitative observations and limitations reported from the Alfred AI deployments and authors' analysis of operational experience; evidence comes from live deployments but is descriptive rather than quantitative.
medium · negative · Artificial Intelligence Agents in Knowledge Work: Transformi... · presence and impact of governance complexity, model errors, orchestration diffic...