Evidence (4049 claims)
Claim counts by topic:

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. A dash (—) indicates no claims in that cell; some row totals exceed the sum of the four direction columns, presumably because claims without a coded direction count only toward the total.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
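As a reading aid for the matrix above, a minimal sketch of how such rows can be summarized: it computes the share of positive findings per outcome from two rows copied verbatim from the table (the `rows` dict and `None`-for-dash convention are illustrative assumptions, not the dashboard's actual data model).

```python
# Summarize direction shares for two rows of the evidence matrix.
# Tuple order follows the table columns: Positive, Negative, Mixed, Null, Total.
# None marks cells shown as "—" (no claims recorded in that direction).
rows = {
    "Firm Productivity": (273, 33, 68, 10, 389),
    "Job Displacement": (5, 28, 12, None, 45),
}

for outcome, (pos, neg, mixed, null, total) in rows.items():
    # Claims with a coded direction; can fall short of the stated total.
    counted = sum(v for v in (pos, neg, mixed, null) if v is not None)
    share_positive = pos / total
    print(f"{outcome}: {share_positive:.0%} positive "
          f"({counted} of {total} claims carry a coded direction)")
```

Run on these two rows, the script shows how lopsided the directions can be: roughly 70% of Firm Productivity claims are positive, versus about 11% for Job Displacement.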
Governance
Claims filtered to the Governance category, each paired with a description of its supporting evidence.
- **Claim:** Widespread adoption of formal governance could lower systemic risk from enterprise AI failures, whereas heterogeneous adoption may create winners and losers based on governance quality.
  **Evidence:** Conceptual systems-level argument and comparative-case reasoning; no quantitative systemic-risk modeling or empirical evidence provided.
- **Claim:** Greater automation of routine ERP/CRM tasks will displace some operational roles while increasing demand for governance, oversight, and AI-engineering skills, shifting labor toward higher-skill, higher-wage tasks.
  **Evidence:** Theoretical labor-market implication derived from the pattern's effects on task automation and governance needs; based on qualitative synthesis, not empirical labor-market analysis.
- **Claim:** Risk-adjusted total cost of ownership (TCO) may fall if governance prevents costly incidents (e.g., compliance fines, data breaches), despite higher upfront costs.
  **Evidence:** Conceptual economic argument supported by qualitative examples and best-practice reasoning; no empirical ROI or incident-rate data presented.
- **Claim:** Macroeconomic policymakers should monitor aggregate-demand effects from AI-driven reallocation and inequality; active fiscal and monetary coordination may be required to manage them.
  **Evidence:** Synthesis and policy implication drawing on macroeconomic reasoning and literature linking redistribution and demand to overall employment and growth; not presented as a single causal empirical result.
- **Claim:** The two tail risks (cyber-triggered escalation and loss-of-control) create fat-tailed risk distributions that complicate risk pricing and capital allocation, potentially causing precautionary market behavior (deleveraging, higher liquidity buffers).
  **Evidence:** Risk-analysis reasoning about tail risks and market responses; no empirical calibration to financial/economic data provided.
- **Claim:** Cross-border spillovers from HACCA proliferation may alter foreign direct investment (FDI) risk assessments, reconfigure supply chains, and drive onshoring/hardening of critical infrastructure.
  **Evidence:** International political-economy scenario analysis linking elevated cyber risks to investment and supply-chain decisions (qualitative).
- **Claim:** There is a severe tail risk of sustained loss-of-control over HACCA instances (rogue deployments that cannot be reliably contained).
  **Evidence:** Threat modeling and red-team reasoning demonstrating plausible autonomous persistence, migration, and self-healing mechanisms (theoretical; no empirical incidence data).
- **Claim:** There is a severe tail risk that autonomous cyber operations could accidentally escalate into cyber-triggered crises involving nuclear-armed states (misattribution or inadvertent effects on critical systems).
  **Evidence:** Scenario analysis and expert judgment linking HACCA behaviors to escalation pathways; analogies to prior cyber incidents and geopolitical escalation dynamics (qualitative; no probabilistic calibration).
- **Claim:** AI diffusion may widen inequality across education levels and regions and potentially reduce labor supply among financially constrained households.
  **Evidence:** Derived implication from heterogeneous negative associations between AI-rich regions and employment intention among low-educated and financially constrained respondents in the cross-sectional sample (n=889).
- **Claim:** Measurement friction from the results-actionability gap creates a hidden cost: teams can detect problems but cannot cheaply translate findings into improvements, reducing the speed and ROI of LLM investments.
  **Evidence:** Authors' implication drawn from interview evidence about the effort required for remediation and the lack of direct translation from evaluations to fixes; presented as an economic implication rather than a directly measured quantity.
- **Claim:** Risk of platform shutdown (platform mortality) shapes user behavior by reducing incentives to invest time/effort configuring agents, creating stranded-asset-like risks.
  **Evidence:** Qualitative observations and economic reasoning linking user reports/behaviors to perceived platform risk during the one-month observational period; no formal economic measurement or causal identification.
- **Claim:** If verified, explainable GLAI is priced higher because of compliance costs, then access-to-justice gaps may widen as lower-cost but riskier offerings persist or services become more expensive.
  **Evidence:** Distributional reasoning linking higher compliance costs to price increases and access effects; supported by illustrative examples, with no empirical price or access data.
- **Claim:** Routine, unrestrained adoption of GLAI without enforceable mechanisms for effective human review threatens judicial independence and rights protections.
  **Evidence:** Normative and legal argumentation supported by conceptual analysis and illustrative scenarios. No empirical causal evidence; projection based on theoretical risk pathways.
- **Claim:** Insurers will price systemic-tail risks differently from routine failure risk, potentially increasing premiums for high-autonomy deployments or requiring minimum oversight modes for coverage.
  **Evidence:** Analytical argument about liability, risk pooling, and insurance practices; no empirical insurance-pricing data supplied.
- **Claim:** Sectors that rely heavily on visual evidence (e.g., media verification, e-commerce product updates, autonomous systems) face higher exposure to temporal inaccuracies and will likely incur monitoring/updating costs.
  **Evidence:** Implications discussion linking modality-gap and time-sensitivity results to sector-specific risk exposure; qualitative projection rather than measured sectoral data.
- **Claim:** Psychological harms documented (e.g., delusional content, suicidality, misrepresented sentience) impose downstream economic costs (healthcare use, lost productivity, litigation) that should be factored into cost–benefit analyses of LLM deployment.
  **Evidence:** Authors' policy discussion linking observed harms to standard categories of social/economic costs; no direct measurement of downstream economic costs in the study.
- **Claim:** The message-level evidence of chatbot-related psychological harms implies potential economic consequences: reduced consumer trust and adoption, increased regulatory scrutiny and compliance costs, moral-hazard trade-offs for engagement-driven business models, higher insurance/liability costs, and incentives for investment in safety R&D and monitoring.
  **Evidence:** Discussion/implications section extrapolating from observed harms to potential economic effects; these are analytical inferences rather than empirically measured economic outcomes.
- **Claim:** There is a risk of deskilling, especially for trainees receiving reduced diagnostic practice when AI automates routine tasks.
  **Evidence:** Conceptual arguments supported by qualitative reports and limited observational findings; empirical longitudinal evidence quantifying deskilling is sparse.
- **Claim:** Uneven adoption of symbiarchic HR practices across firms could concentrate productivity gains and rents in firms or occupations that successfully integrate AI while preserving human judgment, potentially widening within- and between-firm inequality.
  **Evidence:** Projected distributional implication based on economic theory and the paper's framework; presented as a hypothesis for empirical testing rather than as an observed result.
- **Claim:** There is a risk of regulatory arbitrage and spillovers: better detection on regulated platforms could drive problem gamblers to unregulated venues.
  **Evidence:** The paper notes this as a theoretical risk and policy concern; no direct empirical evidence is provided in the review to quantify this effect.
- **Claim:** Concerns that foundation model providers and downstream firms may capture excessive consumer surplus motivate the regulatory interventions analyzed in the paper.
  **Evidence:** Motivation and literature/regulatory context presented in the paper; not an empirical finding but a stated rationale for the policy analysis.
- **Claim:** The problem of characterizing equilibria in finite-player continuous-time games with endogenous signals has resisted exact analysis for four decades.
  **Evidence:** Historical claim asserted in the paper's introduction/motivation, referencing prior literature gaps (longstanding difficulty with infinite belief hierarchies in dynamic games with endogenous signals).
- **Claim:** Such disjointed strategies cannot manage the systemic socio-economic disruption ahead.
  **Evidence:** Asserted in the abstract as a conclusion/argument; no empirical evaluation described in the abstract.
- **Claim:** AI threatens to fracture the 20th-century social contract.
  **Evidence:** Asserted in the abstract as a normative/predictive claim; no empirical support described in the abstract.
- **Claim:** Without effective safeguards, the digital world can shift from a space of opportunity to one of harm.
  **Evidence:** Normative/conditional claim drawing on the book's analysis; not an empirical finding, with no method or sample size applicable in the excerpt.
- **Claim:** AI-driven productivity gains may not translate into broad-based demand if income is concentrated among capital owners, which could dampen aggregate profitability over time.
  **Evidence:** Theoretical argument grounded in Mandel-like distributional mechanics and the demand-driven growth literature; speculative without empirical aggregation tests in the paper.
- **Claim:** Concentration of curated datasets and restrictive IP can create monopolistic rents and underprovision of public-good datasets, implying that policy interventions (data-sharing incentives/standards) may be required.
  **Evidence:** Economic reasoning about market formation and data as a scarce asset; no empirical market analysis provided in the summary (theoretical implication).
- **Claim:** Because deception effectiveness declines with transparency and attacker learning, strategic externalities can arise across actors (e.g., disclosures by one actor can reduce deception value for others), suggesting roles for coordination or insurance markets.
  **Evidence:** Conceptual implication and economic argument in the discussion section; not supported by explicit multi-actor modeling or empirical market analysis in the paper (argumentative/theoretical).
- **Claim:** More granular and auditable credentials may shift signaling dynamics and risk credential inflation; regulators should monitor credential proliferation and market value.
  **Evidence:** Conceptual warning in the paper (theoretical); no empirical credential-market study included.
- **Claim:** These infrastructural and access constraints create unequal starting points that can amplify later disparities in labor-market preparedness.
  **Evidence:** Inference drawn from observed survey disparities in access, hands-on training, and preparedness; the study did not directly measure labor-market outcomes but links preparedness to potential labor-market effects in the discussion.
- **Claim:** Top-down AI guidance from institutions is common, while grassroots input from educators and students is often missing, which reduces policy relevance and uptake.
  **Evidence:** Survey items and thematic coding indicating the origin and participatory nature of institutional AI guidelines; comparative prevalence reported in open and closed responses.
- **Claim:** Overreliance on GenAI CDS may lead to deskilling of clinicians, eroding judgment over time and increasing systemic vulnerability.
  **Evidence:** The paper cites theoretical risk and references limited longitudinal concerns; empirical longitudinal studies demonstrating deskilling are scarce per the paper's stated evidence gaps.
- **Claim:** Commercial structural biology services for routine solved folds may be commoditized, pushing firms toward complex validation, novel targets, or high-value contract research.
  **Evidence:** The paper suggests this in "Disruption of service markets" as a projected industry response; it is a strategic implication rather than an empirically demonstrated trend in the text.
- **Claim:** Organizational compliance, governance, and transaction costs shape which AI uses are feasible, producing heterogeneity in adoption across firms; trust and accountability frictions can slow adoption even when productivity gains exist.
  **Evidence:** Workshop participants (n=15) reported compliance and governance considerations; the authors infer broader organizational heterogeneity and friction effects from these qualitative data.
- **Claim:** Designers' expressed concerns about skill development suggest potential long-term effects on human capital accumulation; adoption that reduces learning opportunities could lower future wages or employability.
  **Evidence:** Participants' concerns captured in qualitative workshops (n=15); the claim is an extrapolation to labor-market outcomes rather than direct measurement in the study.
- **Claim:** Private governance and firm-level solutions (internal standards, bargaining with unions) may proliferate, but these can entrench firm-specific norms and increase market-power asymmetries.
  **Evidence:** Conceptual argument drawing on the governance and industrial-organization literature; no empirical measurement of prevalence or market-power effects included.
- **Claim:** Inadequate protections reduce public trust in mobile-AI services, which can slow diffusion and undercut the growth trajectories that policy narratives anticipate.
  **Evidence:** Inferred from stakeholder commentary and policy discourse combined with communication-rights theory; the paper does not present survey or adoption-rate data.
- **Claim:** Low-wage and platform workers are particularly exposed to algorithmic management and surveillance, with potential downward pressure on wages, bargaining power, and job quality.
  **Evidence:** The paper's qualitative analysis of stakeholder comments and policy omissions, combined with literature-based inference about platform labor dynamics; no primary labor-market survey or quantitative wage data provided.
- **Claim:** Soft-law governance and growth-first narratives risk concentrating benefits (investment, productivity gains) while externalizing costs (privacy harms, biased decisioning) onto vulnerable populations, exacerbating inequality and reducing inclusive economic development.
  **Evidence:** Analytic inference from a qualitative review of governance instruments and policy narratives combined with communications-ecology and political-economy reasoning; not based on quantitative economic measurement in the paper.
- **Claim:** Uncertainty about long-run agentic behavior increases the option value and downside risk of investing in agentic systems, which may raise discount rates and required returns.
  **Evidence:** Economic argument applying risk/return logic to agentic uncertainty; no quantitative empirical evidence provided.
- **Claim:** Economic rents and advantages may accrue to agents who control large datasets, computing resources, and organizational processes that effectively integrate AI as a co-pilot, potentially increasing market concentration among AI providers.
  **Evidence:** Economic theory on scale economies and platform effects combined with observed industry patterns; the reviewed literature provides conceptual arguments and case examples rather than broad empirical market-structure measurement.
- **Claim:** Generative AI poses substitution risk for entry-level or routine cognitive work focused on generation or drafting without evaluative responsibility.
  **Evidence:** Task-based analyses and case studies indicating automation potential for routine generation tasks; empirical demonstrations of AI-produced drafts/outputs that could replace such work, though longer-run displacement evidence is limited.
- **Claim:** Upfront integration and recurring governance costs mean smaller firms may face higher relative costs, potentially increasing scale advantages for larger incumbents.
  **Evidence:** Deployment case studies and cost reports indicating significant fixed integration and governance costs; the inference to market structure is speculative.
- **Claim:** There is a risk of deskilling through excessive reliance on AI, implying a need for continuous training and certification to preserve human judgment.
  **Evidence:** Qualitative interview evidence and observed concerns about overreliance; the authors recommend training/governance based on identified risks; no direct longitudinal measurement of deskilling provided in the summary.
- **Claim:** Recommendation algorithms and widespread automated advice can induce herding or increase common exposures across retail investor portfolios, with potential macroprudential implications.
  **Evidence:** Theoretical discussion supported by examples from retail trading episodes and the algorithmic-amplification literature referenced in the review (conceptual and anecdotal evidence; limited systematic empirical quantification).
- **Claim:** Insurance markets may price AI-specific fraud risk, raising premiums or creating new products (AI-fraud insurance).
  **Evidence:** Speculative economic implication suggested by the authors; no market data or insurer statements cited.
- **Claim:** Vendors offering integrated governed hyperautomation stacks may capture premium pricing and increase switching costs, potentially widening adoption gaps between large incumbents and SMEs.
  **Evidence:** Market-structure and competitive dynamics discussed theoretically in the Implications section; no market-share or pricing data provided.
- **Claim:** Higher compliance and liability costs may be passed to districts, potentially affecting the affordability of EdTech for underfunded schools unless federal guidance or subsidies offset costs, a distributional concern.
  **Evidence:** Economic distributional reasoning (theoretical), not supported by empirical pricing or budget-impact data in the article.
- **Claim:** Regulators and standard-setters who value transparency and auditability will need to account for the gap between evaluation results and actionable fixes; firms may require incentives or rules to ensure evaluation leads to remediation, not just documentation.
  **Evidence:** Authors' policy implication derived from the study's finding of a results-actionability gap and discussion of auditability concerns; speculative recommendation rather than empirical finding.
- **Claim:** This study represents the first attempt to conduct a comprehensive evaluation of artificial intelligence (AI) and its influence on job displacement based on the existing body of literature.
  **Evidence:** Author assertion in the paper; the excerpt provides no external verification (no citation of prior reviews/meta-analyses to justify the "first attempt" claim).