Evidence (8486 claims)
- Adoption (5821 claims)
- Productivity (5033 claims)
- Governance (4561 claims)
- Human-AI Collaboration (3600 claims)
- Labor Markets (2749 claims)
- Innovation (2687 claims)
- Org Design (2648 claims)
- Skills & Training (2107 claims)
- Inequality (1429 claims)
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 440 | 117 | 68 | 507 | 1148 |
| Governance & Regulation | 458 | 216 | 125 | 67 | 883 |
| Research Productivity | 270 | 101 | 34 | 303 | 713 |
| Organizational Efficiency | 441 | 105 | 76 | 43 | 669 |
| Technology Adoption Rate | 346 | 130 | 76 | 45 | 602 |
| Firm Productivity | 322 | 38 | 72 | 13 | 450 |
| Output Quality | 272 | 75 | 27 | 30 | 404 |
| AI Safety & Ethics | 122 | 188 | 46 | 27 | 385 |
| Market Structure | 119 | 134 | 86 | 14 | 358 |
| Decision Quality | 182 | 79 | 41 | 20 | 326 |
| Fiscal & Macroeconomic | 95 | 58 | 34 | 22 | 216 |
| Employment Level | 78 | 37 | 80 | 9 | 206 |
| Skill Acquisition | 102 | 37 | 41 | 9 | 189 |
| Innovation Output | 124 | 12 | 26 | 13 | 176 |
| Firm Revenue | 99 | 37 | 24 | — | 160 |
| Consumer Welfare | 77 | 38 | 37 | 7 | 159 |
| Task Allocation | 93 | 17 | 36 | 8 | 156 |
| Inequality Measures | 29 | 81 | 33 | 6 | 149 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 92 | 8 | 4 | 3 | 107 |
| Error Rate | 45 | 53 | 6 | — | 104 |
| Worker Satisfaction | 48 | 36 | 12 | 8 | 104 |
| Training Effectiveness | 59 | 13 | 12 | 16 | 101 |
| Wages & Compensation | 56 | 16 | 20 | 5 | 97 |
| Team Performance | 50 | 13 | 15 | 8 | 87 |
| Automation Exposure | 28 | 29 | 12 | 7 | 79 |
| Job Displacement | 7 | 45 | 13 | — | 65 |
| Hiring & Recruitment | 40 | 4 | 7 | 3 | 54 |
| Developer Productivity | 38 | 4 | 4 | 3 | 49 |
| Social Protection | 22 | 12 | 7 | 2 | 43 |
| Creative Output | 17 | 8 | 6 | 1 | 32 |
| Skill Obsolescence | 3 | 25 | 2 | — | 30 |
| Labor Share of Income | 12 | 7 | 10 | — | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
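A quick way to read the matrix is by the share of positive findings per outcome. The sketch below is illustrative only (not part of the source data pipeline): the three rows are copied from the table above, and it uses the reported Total column directly, since for some rows the four direction counts sum to less than the stated total.

```python
# Hedged sketch: share of positive findings per outcome, using counts
# copied from the evidence matrix above. The reported Total column is
# used as the denominator rather than re-summing the direction counts.

rows = {
    # outcome: (positive claims, total claims) as reported in the matrix
    "Firm Productivity": (322, 450),
    "Job Displacement": (7, 65),
    "Task Completion Time": (92, 107),
}

for outcome, (pos, total) in rows.items():
    print(f"{outcome}: {pos / total:.0%} of {total} claims positive")
```

Run as written, this prints a 72% positive share for Firm Productivity against 11% for Job Displacement, which mirrors the table's contrast between productivity and displacement outcomes.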
Regulators may impose reporting or certification requirements related to AI governance, and clear liability rules will influence contract design and pricing in AI service markets.
Policy projection informed by regulatory trends and the paper's argument about auditability needs; speculative with no legal/regulatory citations demonstrating imminent mandates.
Insurers may revise underwriting, raise premiums, or exclude certain AI-related exposures until risk assessments improve; new insurance products may emerge for AI governance failures.
Policy and market impact speculation based on perceived risk; no empirical insurer responses or underwriting data provided.
Firms will reallocate resources toward AI governance, monitoring tools, and skilled auditors (increasing compliance and labor costs), and demand for products/services (prompt-provenance tools, watermarking, AI forensic services, certified-safe LLMs) will rise.
Market/economic projection based on the identified threat and presumed demand for mitigations; speculative without market-data support in the paper.
Policy implication: policymakers seeking to balance openness and security should consider layered, adaptive instruments that can be tuned by sector or actor; economic analysis can help identify where centralized coordination yields scale economies versus where decentralized rights‑based approaches preserve competition and trust.
Normative policy recommendation extrapolated from the paper's comparative findings and theoretical framing; not tested empirically in the paper.
Increased liability risk and compliance costs could raise barriers to entry for startups and niche vendors and potentially consolidate market power among larger firms better able to absorb compliance overhead; alternatively, new markets could emerge for compliant, certified providers.
Economic reasoning about compliance costs and market structure (theoretical predictions), not supported by empirical industry data in the Article.
Demand for labor may shift from routine instrument operation and image processing toward higher-level tasks (experiment design, oversight, interpretation), and LLMs may amplify productivity of skilled scientists, potentially increasing wage premia for those who supervise AI-guided workflows.
Labor-economics reasoning and analogy to prior automation effects; no empirical labor-market or wage data presented specific to microscopy.
Adoption of Model Medicine practices would create new markets and roles (e.g., diagnostics, remediation services, 'model clinicians'), affect regulation, insurance, and procurement, and could shift R&D funding toward clinical-model sciences.
Theoretical economic implications and market/regulatory analysis provided in the discussion section (speculative policy and market projections; no empirical market data).
Implication for AI/platform economics: complementarities between public funding and digital (AI-enabled) platforms can convert public demand into decentralized labor opportunities, reshaping sectoral employment without growth in traditional firms.
Conceptual extension of empirical findings on platform-mediated cultural employment and fiscal procurement interactions; evidence comes from city-level DID results and inferred platform-activity proxies (280 cities, 2008–2021).
Principal stratification analysis suggests the training’s effect on scores operated primarily by expanding the set of LLM users (an adoption channel) rather than substantially improving per-user productivity among those who would already use the LLM.
Mechanism decomposition using principal stratification applied to the randomized trial data (n = 164); analysis indicates a larger contribution from the adoption margin than from within-user productivity gains, though estimates have wide confidence intervals.
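The adoption-versus-intensity decomposition described above can be sketched numerically. All strata shares and gains below are hypothetical placeholders chosen to illustrate the structure of the argument, not the paper's estimates from the n = 164 trial.

```python
# Hedged sketch of a principal-stratification-style decomposition:
# the overall training effect is a stratum-weighted average of gains
# among always-users, training-induced adopters, and never-users.
# All numbers are hypothetical placeholders, not the paper's estimates.

share_always_users = 0.40   # would use the LLM with or without training
share_induced_users = 0.35  # adopt the LLM only because of the training
share_never_users = 0.25    # do not adopt either way

gain_always = 0.05   # per-user productivity change among always-users
gain_induced = 0.60  # score gain among training-induced adopters
gain_never = 0.00    # no LLM use, no LLM-mediated gain

# Intent-to-treat effect = stratum-weighted average of gains.
total_effect = (share_always_users * gain_always
                + share_induced_users * gain_induced
                + share_never_users * gain_never)

adoption_channel = share_induced_users * gain_induced
within_user_channel = share_always_users * gain_always

print(round(total_effect, 3))         # 0.23
print(round(adoption_channel, 3))     # 0.21
print(round(within_user_channel, 3))  # 0.02
```

Under these placeholder values the adoption margin contributes most of the total effect, which is the qualitative pattern the paper's analysis reports (with wide confidence intervals).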
Smart power strategies that promote domestic AI champions (via procurement, subsidies, industrial policy) affect labour markets, inequality, and international labour arbitrage.
Conceptual claim grounded in literature on industrial policy and labour economics with policy examples referenced; no primary microdata analysis in the paper.
Widespread adoption of formal governance could lower systemic risk from enterprise AI failures, whereas heterogeneous adoption may create winners and losers based on governance quality.
Conceptual systems-level argument and comparative-case reasoning; no quantitative systemic-risk modeling or empirical evidence provided.
Greater automation of routine ERP/CRM tasks will displace some operational roles while increasing demand for governance, oversight, and AI-engineering skills, shifting labor toward higher-skill, higher-wage tasks.
Theoretical labor-market implication derived from the pattern's effects on task automation and governance needs; based on qualitative synthesis, not empirical labor-market analysis.
Risk-adjusted total cost of ownership (TCO) may fall if governance prevents costly incidents (e.g., compliance fines, data breaches), despite higher upfront costs.
Conceptual economic argument supported by qualitative examples and best-practice reasoning; no empirical ROI or incident-rate data presented.
Expensive formalization may push firms either to remain informal (preserving low-cost labor) or to automate instead of hiring formally; policy choices that lower formalization costs could retain jobs that otherwise would be automated.
Analytical inference from the measured CFIL and NWC values across the 19 countries and standard economic reasoning about cost-driven firm choices; the note does not present micro-level causal tests of these pathways.
Macroeconomic policy should monitor aggregate demand effects from reallocation and inequality; active fiscal and monetary coordination may be required to manage aggregate impacts of AI-driven reallocation.
Synthesis and policy implication drawing on macroeconomic reasoning and literature linking redistribution and demand to overall employment and growth; not presented as a single causal empirical result.
Voyage routing remains dominated by heuristic methods.
Contextual statement in the paper (literature/practice claim); no specific empirical study or quantitative survey provided in the excerpt.
Systemic risks from misaligned optimisation (narrow objectives, externalities) warrant oversight mechanisms (AI steering committees, escalation paths) and potentially sectoral regulation of decision-critical algorithms.
Policy-prescriptive claim based on conceptual identification of optimisation externalities and accountability gaps; no sectoral case studies or empirical risk quantification in the paper.
The two tail risks (cyber-triggered escalation and loss-of-control) create fat-tailed risk distributions that complicate risk pricing and capital allocation, potentially causing precautionary market behavior (deleveraging, higher liquidity buffers).
Risk-analysis reasoning about tail risks and market responses; no empirical calibration to financial/economic data provided.
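Why fat tails complicate pricing can be made concrete with a small worked example (my own illustration, not from the paper): two loss distributions with the same mean can require very different tail capital. The parameters below (exponential vs Pareto, both with mean 3) are hypothetical.

```python
import math

# Hedged illustration: compare the 99.9% loss quantile of a thin-tailed
# exponential vs a fat-tailed Pareto, both calibrated to mean 3.
# All parameters are hypothetical, chosen only to show the contrast.

p = 0.999

# Exponential with mean 3: quantile Q(p) = -mean * ln(1 - p).
q_exp = -3 * math.log(1 - p)

# Pareto with tail index alpha = 1.5, scale xm = 1
# (mean = alpha * xm / (alpha - 1) = 3): Q(p) = xm * (1 - p) ** (-1 / alpha).
q_pareto = 1 * (1 - p) ** (-1 / 1.5)

print(round(q_exp, 1))     # 20.7
print(round(q_pareto, 1))  # 100.0
```

Same expected loss, yet the fat-tailed case demands roughly five times the 99.9% capital buffer; an insurer or risk manager who prices only to the mean is badly exposed, which is the mechanism behind the precautionary behavior claimed above.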
Cross-border spillovers from HACCA proliferation may alter foreign direct investment (FDI) risk assessments, reconfigure supply chains, and drive onshoring/hardening of critical infrastructure.
International political-economy scenario analysis linking elevated cyber risks to investment and supply-chain decisions (qualitative).
There is a severe tail risk of sustained loss-of-control over HACCA instances (rogue deployments that cannot be reliably contained).
Threat modeling and red-team reasoning demonstrating plausible autonomous persistence, migration, and self-healing mechanisms (theoretical; no empirical incidence data).
There is a severe tail risk that autonomous cyber operations could accidentally escalate into cyber-triggered crises involving nuclear-armed states (misattribution or inadvertent effects on critical systems).
Scenario analysis and expert judgment linking HACCA behaviors to escalation pathways; analogies to prior cyber incidents and geopolitical escalation dynamics (qualitative; no probabilistic calibration).
AI diffusion may widen inequality across education and regions and potentially reduce labor supply among financially constrained households.
Derived implication from heterogeneous negative associations between AI-rich regions and employment intention for low-educated and financially-constrained respondents in the cross-sectional sample (n=889).
Measurement friction from the results-actionability gap creates a hidden cost: teams can detect problems but cannot cheaply translate findings into improvements, reducing the speed and ROI of LLM investments.
Authors' implication drawn from interview evidence about the effort required for remediation and lack of direct translation from evaluations to fixes; presented as an economic implication rather than directly measured quantity.
Risk of platform shutdown (platform mortality) shapes user behavior by reducing incentives to invest time/effort configuring agents, creating stranded-asset-like risks.
Qualitative observations and economic reasoning linking user reports/behaviors to perceived platform risk during the one-month observational period; no formal economic measurement or causal identification.
If verified and explainable GLAI is priced higher due to compliance costs, access-to-justice gaps may widen as lower-cost but riskier offerings persist or as services become more expensive.
Distributional reasoning linking higher compliance costs to price increases and access effects; supported by illustrative examples, no empirical price or access data.
Routine, unrestrained adoption of GLAI without enforceable mechanisms for effective human review threatens judicial independence and rights protections.
Normative and legal argumentation supported by conceptual analysis and illustrative scenarios. No empirical causal evidence; projection based on theoretical risk pathways.
Insurers will price systemic-tail risks differently from routine failure risk, potentially increasing premiums for high-autonomy deployments or requiring minimum oversight modes for coverage.
Analytical argument about liability, risk pooling, and insurance practices; no empirical insurance-pricing data supplied.
Sectors that rely heavily on visual evidence (e.g., media verification, e-commerce product updates, autonomous systems) face higher exposure to temporal inaccuracies and will likely incur monitoring/updating costs.
Implications discussion linking modality gap and time-sensitivity results to sector-specific risk exposure; qualitative projection rather than measured sectoral data.
Psychological harms documented (e.g., delusional content, suicidality, misrepresented sentience) impose downstream economic costs (healthcare use, lost productivity, litigation) that should be factored into cost–benefit analyses of LLM deployment.
Authors' policy discussion linking observed harms to standard categories of social/economic costs; no direct measurement of downstream economic costs in the study.
The message-level evidence of chatbot-related psychological harms implies potential economic consequences: reduced consumer trust and adoption, increased regulatory scrutiny and compliance costs, moral-hazard trade-offs for engagement-driven business models, higher insurance/liability costs, and incentives for investment in safety R&D and monitoring.
Discussion/implications section extrapolating from observed harms to potential economic effects; these are analytical inferences rather than empirically measured economic outcomes.
There is a risk of deskilling, especially for trainees receiving reduced diagnostic practice when AI automates routine tasks.
Conceptual arguments supported by qualitative reports and limited observational findings; empirical longitudinal evidence quantifying deskilling is sparse.
Erosion of informal communication and tacit coordination driven by AI integration can create negative externalities on team efficiency that are not captured by short-run metrics.
Derived from interview narratives describing loss of ad hoc communications and tacit knowledge exchange after AI adoption; interpreted as producing costs not reflected in immediate measurable outputs.
Uneven adoption of symbiarchic HR practices across firms could concentrate productivity gains and rents in firms or occupations that successfully integrate AI while preserving human judgement, potentially widening within‑ and between‑firm inequality.
Projected distributional implication based on economic theory and the paper’s framework; presented as a hypothesis for empirical testing rather than as an observed result.
There is a risk of regulatory arbitrage and spillovers: better detection on regulated platforms could drive problem gamblers to unregulated venues.
Paper notes this as a theoretical risk and policy concern; no direct empirical evidence provided in the review to quantify this effect.
The demands of overseeing multiple AI agents drive increased task-switching for workers.
Asserted in the paper as part of the mechanism linking AI use to cognitive overload, based on organizational observations and theory; no empirical task-switching frequency or time-use data provided in the excerpt.
Concerns that foundation model providers and downstream firms may capture excessive consumer surplus motivate regulatory interventions analyzed in the paper.
Motivation and literature/regulatory context presented in the paper; not an empirical finding but a stated rationale for the policy analysis.
The problem of characterizing equilibria in finite-player continuous-time games with endogenous signals has resisted exact analysis for four decades.
Historical claim asserted in the paper's introduction/motivation referencing prior literature gaps (longstanding difficulty in dealing with infinite belief hierarchies in dynamic games with endogenous signals).
Such disjointed strategies cannot manage the systemic socio-economic disruption ahead.
Asserted in abstract as a conclusion/argument; no empirical evaluation described in the abstract.
AI threatens to fracture the 20th-century social contract.
Asserted in abstract as a normative/predictive claim; no empirical support described in the abstract.
Mergers are a barrier to economic growth (negative association between mergers and GDP growth).
Model results reported a negative relationship between mergers and GDP growth in the regressions described in the summary; however, the summary does not define how 'mergers' is measured, how widely it was observed across countries, or the statistical significance levels.
Without effective safeguards, the digital world can shift from a space of opportunity to one of harm.
Normative/conditional claim drawing on the book's analysis; not an empirical finding—no method or sample size applicable in the excerpt.
Unequal GenAI adoption has implications for productivity, skill formation, and economic inequality in an AI-enabled economy.
Interpretation/implication drawn from observed gendered adoption patterns in the 2023–2024 UK survey and literature on technology diffusion and labor-market impacts (no direct empirical measurement of downstream economic effects in the paper).
Preliminary evidence that inappropriate reliance on AI outputs is worse for complex information needs (complex answers).
Post-hoc/stratified analysis in the user study examining the effect of the complexity of the information need on reliance/error-detection; described as preliminary in the paper.
AI-driven productivity gains may not translate into broad-based demand if income is concentrated among capital owners, which could dampen aggregate profitability over time.
Theoretical argument grounded in Mandel-like distributional mechanics and demand-driven growth literature; speculative without empirical aggregation tests in the paper.
Concentration of curated datasets and restrictive IP can create monopolistic rents and underprovision of public‑good datasets, implying policy interventions (data sharing incentives/standards) may be required.
Economic reasoning about market formation and data as a scarce asset; no empirical market analysis provided in summary (theoretical implication).
Because deception effectiveness declines with transparency and attacker learning, strategic externalities can arise across actors (e.g., disclosures by one actor can reduce deception value for others), suggesting roles for coordination or insurance markets.
Conceptual implication and economic argument in the discussion section; not supported by explicit multi-actor modeling or empirical market analysis in the paper (argumentative/theoretical).
More granular and auditable credentials may shift signaling dynamics and risk credential inflation; regulators should monitor credential proliferation and market value.
Conceptual warning in paper (theoretical); no empirical credential-market study included.
These infrastructural and access constraints create unequal starting points that can amplify later disparities in labor-market preparedness.
Inference drawn from observed survey disparities in access, hands-on training, and preparedness; the study did not directly measure labor-market outcomes but links preparedness to potential labor-market effects in discussion.
Top-down AI guidance from institutions is common, while grassroots input from educators and students is often missing, which reduces policy relevance and uptake.
Survey items and thematic coding indicating the origin and participatory nature of institutional AI guidelines; comparative prevalence reported in open and closed responses.
Overreliance on GenAI CDS may lead to deskilling of clinicians, eroding judgment over time and increasing systemic vulnerability.
The paper cites theoretical risk and references limited longitudinal concerns; empirical longitudinal studies demonstrating deskilling are scarce per the paper’s stated evidence gaps.