Evidence (4560 claims)
- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
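The direction skew in the matrix can be summarized as each outcome's positive share of total claims. A minimal illustrative sketch in Python (the values are copied from a few rows of the table above; the variable names are ours, and "—" cells would be treated as zero):

```python
# Positive share of claims per outcome, using (Positive, Total) pairs
# taken from selected rows of the evidence matrix above.
rows = {
    "Firm Productivity":    (277, 394),
    "AI Safety & Ethics":   (117, 364),
    "Task Completion Time": (78, 89),
    "Job Displacement":     (5, 48),
}

# Share of an outcome's claims with a positive direction of finding.
shares = {name: pos / total for name, (pos, total) in rows.items()}

for name, share in shares.items():
    print(f"{name}: {share:.0%} of claims positive")
```

The contrast is stark across outcomes: task-completion-time claims are overwhelmingly positive, while job-displacement claims are overwhelmingly negative or mixed.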
Productivity
- **Claim:** Insurers may revise underwriting, raise premiums, or exclude certain AI-related exposures until risk assessments improve; new insurance products may emerge for AI governance failures.
  **Basis:** Policy and market-impact speculation based on perceived risk; no empirical insurer responses or underwriting data provided.
- **Claim:** Firms will reallocate resources toward AI governance, monitoring tools, and skilled auditors (increasing compliance and labor costs), and demand for products/services (prompt-provenance tools, watermarking, AI forensic services, certified-safe LLMs) will rise.
  **Basis:** Market/economic projection based on the identified threat and presumed demand for mitigations; speculative without market-data support in the paper.
- **Claim:** Policy implication: policymakers seeking to balance openness and security should consider layered, adaptive instruments that can be tuned by sector or actor; economic analysis can help identify where centralized coordination yields scale economies versus where decentralized rights-based approaches preserve competition and trust.
  **Basis:** Normative policy recommendation extrapolated from the paper's comparative findings and theoretical framing; not tested empirically in the paper.
- **Claim:** Demand for labor may shift from routine instrument operation and image processing toward higher-level tasks (experiment design, oversight, interpretation), and LLMs may amplify the productivity of skilled scientists, potentially increasing wage premia for those who supervise AI-guided workflows.
  **Basis:** Labor-economics reasoning and analogy to prior automation effects; no empirical labor-market or wage data presented specific to microscopy.
- **Claim:** Principal stratification analysis suggests the training's effect on scores operated primarily by expanding the set of LLM users (an adoption channel) rather than by substantially improving per-user productivity among those who would already use the LLM.
  **Basis:** Mechanism decomposition using principal stratification applied to the randomized trial data (n = 164); the analysis indicates a larger contribution from the adoption margin than from within-user productivity gains, though estimates have wide confidence intervals.
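The adoption-versus-intensive-margin logic of that decomposition can be sketched numerically. All numbers below are invented for illustration; the sketch only shows how a total effect splits across principal strata (always-users, training-induced users, never-users), in the spirit of the analysis described above:

```python
# Hypothetical principal-stratification-style decomposition (all values invented).
# Strata: always-users (would use the LLM regardless), induced users (use only
# if trained), never-users (never use; contribute no effect).
p_always, p_induced, p_never = 0.40, 0.35, 0.25   # stratum shares

effect_always = 0.05    # per-user score gain among always-users (intensive margin)
effect_induced = 0.60   # score gain among training-induced adopters (adoption margin)

# Average treatment effect = share-weighted sum of stratum-specific effects.
ate = p_always * effect_always + p_induced * effect_induced
adoption_share = p_induced * effect_induced / ate

print(f"ATE = {ate:.3f}; adoption margin accounts for {adoption_share:.0%}")
```

With these invented inputs, most of the total effect flows through the adoption margin even though induced users are a minority, which mirrors the qualitative conclusion reported for the trial.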
- **Claim:** Widespread adoption of formal governance could lower systemic risk from enterprise AI failures, whereas heterogeneous adoption may create winners and losers based on governance quality.
  **Basis:** Conceptual systems-level argument and comparative-case reasoning; no quantitative systemic-risk modeling or empirical evidence provided.
- **Claim:** Greater automation of routine ERP/CRM tasks will displace some operational roles while increasing demand for governance, oversight, and AI-engineering skills, shifting labor toward higher-skill, higher-wage tasks.
  **Basis:** Theoretical labor-market implication derived from the pattern's effects on task automation and governance needs; based on qualitative synthesis, not empirical labor-market analysis.
- **Claim:** Risk-adjusted total cost of ownership (TCO) may fall if governance prevents costly incidents (e.g., compliance fines, data breaches), despite higher upfront costs.
  **Basis:** Conceptual economic argument supported by qualitative examples and best-practice reasoning; no empirical ROI or incident-rate data presented.
- **Claim:** Voyage routing remains dominated by heuristic methods.
  **Basis:** Contextual statement in the paper (a literature/practice claim); no specific empirical study or quantitative survey provided in the excerpt.
- **Claim:** Systemic risks from misaligned optimisation (narrow objectives, externalities) warrant oversight mechanisms (AI steering committees, escalation paths) and potentially sectoral regulation of decision-critical algorithms.
  **Basis:** Policy-prescriptive claim based on conceptual identification of optimisation externalities and accountability gaps; no sectoral case studies or empirical risk quantification in the paper.
- **Claim:** The two tail risks (cyber-triggered escalation and loss of control) create fat-tailed risk distributions that complicate risk pricing and capital allocation, potentially causing precautionary market behavior (deleveraging, higher liquidity buffers).
  **Basis:** Risk-analysis reasoning about tail risks and market responses; no empirical calibration to financial/economic data provided.
- **Claim:** Cross-border spillovers from HACCA proliferation may alter foreign direct investment (FDI) risk assessments, reconfigure supply chains, and drive onshoring/hardening of critical infrastructure.
  **Basis:** International political-economy scenario analysis linking elevated cyber risks to investment and supply-chain decisions (qualitative).
- **Claim:** There is a severe tail risk of sustained loss of control over HACCA instances (rogue deployments that cannot be reliably contained).
  **Basis:** Threat modeling and red-team reasoning demonstrating plausible autonomous persistence, migration, and self-healing mechanisms (theoretical; no empirical incidence data).
- **Claim:** There is a severe tail risk that autonomous cyber operations could accidentally escalate into cyber-triggered crises involving nuclear-armed states (misattribution or inadvertent effects on critical systems).
  **Basis:** Scenario analysis and expert judgment linking HACCA behaviors to escalation pathways; analogies to prior cyber incidents and geopolitical escalation dynamics (qualitative; no probabilistic calibration).
- **Claim:** Measurement friction from the results-actionability gap creates a hidden cost: teams can detect problems but cannot cheaply translate findings into improvements, reducing the speed and ROI of LLM investments.
  **Basis:** Authors' implication drawn from interview evidence about the effort required for remediation and the lack of a direct translation from evaluations to fixes; presented as an economic implication rather than a directly measured quantity.
- **Claim:** If verified, explainable GLAI is priced higher due to compliance costs, access-to-justice gaps may widen as lower-cost but riskier offerings persist or as services become more expensive.
  **Basis:** Distributional reasoning linking higher compliance costs to price increases and access effects; supported by illustrative examples, with no empirical price or access data.
- **Claim:** Routine, unrestrained adoption of GLAI without enforceable mechanisms for effective human review threatens judicial independence and rights protections.
  **Basis:** Normative and legal argumentation supported by conceptual analysis and illustrative scenarios; no empirical causal evidence, and the projection rests on theoretical risk pathways.
- **Claim:** There is a risk of deskilling, especially for trainees who receive reduced diagnostic practice when AI automates routine tasks.
  **Basis:** Conceptual arguments supported by qualitative reports and limited observational findings; empirical longitudinal evidence quantifying deskilling is sparse.
- **Claim:** Erosion of informal communication and tacit coordination driven by AI integration can create negative externalities for team efficiency that are not captured by short-run metrics.
  **Basis:** Derived from interview narratives describing the loss of ad hoc communication and tacit knowledge exchange after AI adoption; interpreted as producing costs not reflected in immediately measurable outputs.
- **Claim:** Uneven adoption of symbiarchic HR practices across firms could concentrate productivity gains and rents in firms or occupations that successfully integrate AI while preserving human judgement, potentially widening within- and between-firm inequality.
  **Basis:** Projected distributional implication based on economic theory and the paper's framework; presented as a hypothesis for empirical testing rather than as an observed result.
- **Claim:** The demands of overseeing multiple AI agents drive increased task-switching for workers.
  **Basis:** Asserted in the paper as part of the mechanism linking AI use to cognitive overload, based on organizational observations and theory; no empirical task-switching frequency or time-use data provided in the excerpt.
- **Claim:** Such disjointed strategies cannot manage the systemic socio-economic disruption ahead.
  **Basis:** Asserted in the abstract as a conclusion/argument; no empirical evaluation described in the abstract.
- **Claim:** AI threatens to fracture the 20th-century social contract.
  **Basis:** Asserted in the abstract as a normative/predictive claim; no empirical support described in the abstract.
- **Claim:** Mergers are a barrier to economic growth (a negative association between mergers and GDP growth).
  **Basis:** Model results reported a negative relationship between mergers and GDP growth in the regressions described in the summary; however, the summary does not define how 'mergers' is measured, how widely the relationship holds across countries, or the statistical significance levels.
- **Claim:** Unequal GenAI adoption has implications for productivity, skill formation, and economic inequality in an AI-enabled economy.
  **Basis:** Interpretation/implication drawn from observed gendered adoption patterns in the 2023-2024 UK survey and the literature on technology diffusion and labor-market impacts (no direct empirical measurement of downstream economic effects in the paper).
- **Claim:** Preliminary evidence that inappropriate reliance on AI outputs is worse for complex information needs (complex answers).
  **Basis:** Post-hoc/stratified analysis in the user study examining the effect of the complexity of the information need on reliance/error detection; described as preliminary in the paper.
- **Claim:** AI-driven productivity gains may not translate into broad-based demand if income is concentrated among capital owners, which could dampen aggregate profitability over time.
  **Basis:** Theoretical argument grounded in Mandel-like distributional mechanics and the demand-driven growth literature; speculative without empirical aggregation tests in the paper.
- **Claim:** Concentration of curated datasets and restrictive IP can create monopolistic rents and underprovision of public-good datasets, implying policy interventions (data-sharing incentives/standards) may be required.
  **Basis:** Economic reasoning about market formation and data as a scarce asset; no empirical market analysis provided in the summary (theoretical implication).
- **Claim:** More granular and auditable credentials may shift signaling dynamics and risk credential inflation; regulators should monitor credential proliferation and market value.
  **Basis:** Conceptual warning in the paper (theoretical); no empirical credential-market study included.
- **Claim:** Overreliance on GenAI CDS may lead to deskilling of clinicians, eroding judgment over time and increasing systemic vulnerability.
  **Basis:** The paper cites theoretical risk and references limited longitudinal concerns; empirical longitudinal studies demonstrating deskilling are scarce per the paper's stated evidence gaps.
- **Claim:** Commercial structural-biology services for routine solved folds may be commoditized, pushing firms toward complex validation, novel targets, or high-value contract research.
  **Basis:** The paper suggests this in 'Disruption of service markets' as a projected industry response; it is a strategic implication rather than an empirically demonstrated trend in the text.
- **Claim:** Legacy systems and siloed incentives create switching frictions that slow diffusion of AI-enabled ISP; early adopters may achieve sustained cost and service advantages, and vendors bundling technology with change management could capture large rents.
  **Basis:** Authors' argument informed by case observations of switching costs and vendor roles; no causal market-level evidence provided.
- **Claim:** Returns to AI investments may exhibit increasing returns to scale, reinforcing winner-take-most dynamics unless offset by platformization or open-source diffusion.
  **Basis:** Economic scenario reasoning on capital intensity and platform effects; no empirical calibration or econometric evidence provided.
- **Claim:** Because feedbacks from capital and labor onto AI are weak, AI can grow rapidly and may lead to lock-in, concentration, and distributional risks that warrant monitoring and possible redistributive or competition policies.
  **Basis:** Empirical finding of weak negative feedbacks to AI in the estimated interaction coefficients, combined with theoretical interpretation about growth and lock-in risks.
- **Claim:** Inadequate protections reduce public trust in mobile-AI services, which can slow diffusion and undercut the growth trajectories that policy narratives anticipate.
  **Basis:** Inferred from stakeholder commentary and policy discourse combined with communication-rights theory; the paper does not present survey or adoption-rate data.
- **Claim:** Low-wage and platform workers are particularly exposed to algorithmic management and surveillance, with potential downward pressure on wages, bargaining power, and job quality.
  **Basis:** The paper's qualitative analysis of stakeholder comments and policy omissions, combined with literature-based inference about platform labor dynamics; no primary labor-market survey or quantitative wage data provided.
- **Claim:** Soft-law governance and growth-first narratives risk concentrating benefits (investment, productivity gains) while externalizing costs (privacy harms, biased decisioning) onto vulnerable populations, exacerbating inequality and reducing inclusive economic development.
  **Basis:** Analytic inference from a qualitative review of governance instruments and policy narratives, combined with communications-ecology and political-economy reasoning; not based on quantitative economic measurement in the paper.
- **Claim:** Legal liability and cyber-insurance markets will need to adapt as machine-generated code becomes pervasive, with pricing internalizing risk from inadequate verification processes.
  **Basis:** Speculative legal/economic implication discussed in the paper; no actuarial or legal-case data provided.
- **Claim:** Individual developers or firms may underinvest in verification because defect accumulation imposes external costs on downstream actors, creating market failures that can justify standards, certifications, or regulation mandating interlocks or minimum verification practices.
  **Basis:** Policy and market-failure argument based on externalities presented conceptually; no modeling or empirical evidence of such externalities provided.
- **Claim:** Short-run productivity gains from generative AI may be offset by longer-run increases in maintenance, security breaches, and reliability costs if verification lags.
  **Basis:** Economic reasoning and forward-looking implications discussed in the paper; no empirical cost-benefit or longitudinal data presented.
- **Claim:** Small, unverified errors, insecure patterns, and brittle interactions accumulate over time (latent accumulation), increasing operational fragility and long-run maintenance costs.
  **Basis:** Theoretical argument and illustrative examples in the paper; no longitudinal defect-accumulation studies or empirical cost analysis provided.
- **Claim:** Time pressure and productivity incentives lead developers to accept plausible AI outputs without full validation, a behavioral/institutional failure mode called the 'micro-coercion of speed' that effectively reverses the burden of proof.
  **Basis:** Behavioral diagnosis and incentive analysis presented conceptually in the paper; no behavioral experiments, surveys, or observational data reported.
- **Claim:** Hallucination and error risk introduce potential liabilities in client engagements and may change contracting, insurance, and pricing practices in consulting services.
  **Basis:** Derived from practitioner concerns reported in interviews and the authors' normative discussion; no contractual or insurance-market data presented.
- **Claim:** Effective deployment requires governance, verification processes, and liability management to manage hallucination risk, creating adoption costs that may advantage larger firms and affect market concentration and pricing power.
  **Basis:** Argument based on interviews about necessary organizational safeguards and the resource requirements to implement them; the speculative market-structure implications are not empirically tested in the paper.
- **Claim:** Widespread GenAI use may accelerate skill obsolescence for routine competencies and increase the premium on monitoring, critical evaluation, and AI-integration skills, shifting investment toward retraining and upskilling.
  **Basis:** Projection based on qualitative interviews and the authors' economic interpretation of TGAIF; no longitudinal or wage/skill data provided.
- **Claim:** Uncertainty about long-run agentic behavior increases the option value and downside risk of investing in agentic systems, which may raise discount rates and required returns.
  **Basis:** Economic argument applying risk/return logic to agentic uncertainty; no quantitative empirical evidence provided.
- **Claim:** Economic rents and advantages may accrue to agents who control large datasets, computing resources, and organizational processes that effectively integrate AI as a co-pilot, potentially increasing market concentration among AI providers.
  **Basis:** Economic theory on scale economies and platform effects combined with observed industry patterns; the reviewed literature provides conceptual arguments and case examples rather than broad empirical market-structure measurement.
- **Claim:** Generative AI poses substitution risk for entry-level or routine cognitive work focused on generation or drafting without evaluative responsibility.
  **Basis:** Task-based analyses and case studies indicating automation potential for routine generation tasks; empirical demonstrations of AI-produced drafts/outputs that could replace such work, but longer-run displacement evidence is limited.
- **Claim:** Upfront integration and recurring governance costs mean smaller firms may face higher relative costs, potentially increasing scale advantages for larger incumbents.
  **Basis:** Deployment case studies and cost reports indicating significant fixed integration and governance costs; the inference to market structure is speculative.
- **Claim:** There is a risk of deskilling through excessive reliance on AI, implying a need for continuous training and certification to preserve human judgment.
  **Basis:** Qualitative interview evidence and observed concerns about overreliance; the authors recommend training/governance based on the identified risks; no direct longitudinal measurement of deskilling provided in the summary.