Evidence (5126 claims)
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. Row totals may exceed the sum of the listed directions where some claims carry a direction not shown in the table.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Adoption
In markets with near-zero marginal costs and free entry, increases in the number of producers dilute average attention and returns per producer.
Formal theoretical model introduced in the paper (Builder Saturation Effect) that assumes near-zero marginal costs, free entry, and finite human attention; no empirical sample or experimental data reported.
Insufficient organizational resources significantly inhibit AI adoption in procurement (β = -0.19, p < 0.05).
Same questionnaire survey (n=326) and multiple linear regression analysis; reported coefficient β=-0.19 with p<0.05.
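The reported design (n = 326, multiple linear regression, β = -0.19, p < 0.05) can be illustrated with a minimal sketch. All data below are synthetic and the variable names are assumptions; only the setup mirrors the claim.

```python
import numpy as np

# Synthetic stand-in for the survey: n = 326 respondents, adoption intent
# regressed on a "resources gap" score plus one control variable.
rng = np.random.default_rng(0)
n = 326
resources_gap = rng.normal(size=n)   # "insufficient organizational resources"
control = rng.normal(size=n)         # stand-in for the survey's other controls
adoption = -0.19 * resources_gap + 0.30 * control + rng.normal(scale=0.8, size=n)

# Ordinary least squares via the normal equations.
X = np.column_stack([np.ones(n), resources_gap, control])
beta, *_ = np.linalg.lstsq(X, adoption, rcond=None)

# Classical OLS standard errors and the t-statistic for the resources term.
resid = adoption - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_resources = beta[1] / se[1]
print(f"beta = {beta[1]:.3f}, t = {t_resources:.2f}")
```

A |t| above roughly 1.97 corresponds to p < 0.05 at this sample size, which is the significance threshold the claim reports.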
Measuring only technical model performance (such as predictive accuracy) is insufficient for assessing the strategic impact of AI in drug discovery.
Argued in the paper as a critique of current evaluation practices; presented as a conceptual point rather than supported by new empirical data in the excerpt.
Pressure remains high to increase success probabilities in order to improve the effectiveness of pharmaceutical R&D.
Asserted in the paper as motivational context for the work; framed as an industry pressure point rather than backed by a specific empirical sample or quantified survey in the excerpt.
Costs and failure rates in the pharmaceutical R&D process have continued to rise and have not fundamentally improved over the last decade.
Stated as a contextual observation in the paper's opening paragraph; presented as a summary of industry trends (no specific dataset, sample size, or citation included in the excerpt).
Reliance on automated content generation introduces risks of cognitive overreliance, algorithmic bias, and strategic misalignment.
The paper articulates these risks as conceptual/qualitative concerns in its discussion; no quantitative estimates or empirical tests of these specific risks are reported in the provided excerpt.
Current (pay-upfront) models impose a financial barrier to entry for developers, limiting innovation and excluding actors from emerging economies.
Analytical argument in the paper based on cost-structure reasoning and literature on barriers to entry; no empirical sample or causal estimate provided.
Developers and experts still lack a shared view, resulting in repeated coordination, clarification rounds, and error-prone handoffs.
Observational/qualitative claim in paper describing current MSD practice (no numeric sample reported).
Even with AI coding assistants like GitHub Copilot, individual coding tasks are semi-automated, but the workflow connecting domain knowledge to implementation is not.
Qualitative observation/comparative statement in paper (no empirical sample reported).
Multidisciplinary Software Development (MSD) requires domain experts and developers to collaborate across incompatible formalisms and separate artifact sets.
Conceptual/argument in paper framing the problem (no empirical sample reported).
Strict data sovereignty laws fragment regional collaboration between African Union member states and hinder AI development.
Stated in the paper as a policy barrier; supported by the authors' policy review of data sovereignty rules and their implications for cross-border data sharing.
Restricted cloud access due to payment system mismatches and volatile exchange rates is a barrier to AI adoption in Africa.
Claim made in the paper as part of the list of barriers; based on the authors' qualitative and quantitative review and reference to policy/financial constraints across African countries.
Important barriers include limited access to high-performance computing (HPC).
Paper identifies limited HPC access as a key barrier; supported by the authors' collection and consolidation of HPC availability data via the Africa AI Compute Tracker (ACT).
Africa's participation in modern AI development is constrained by severe infrastructural and policy gaps.
Stated as a central argument in the paper; supported by the paper's synthesis of qualitative and quantitative evidence and reference to official declarations on AI adoption across the continent.
Only 12% of AI market value is used in physical activities.
Descriptive aggregate: authors categorize and report that 12% of estimated AI market value maps to physical activities.
Off-the-shelf implementations of DRL have seen mixed success, often plagued by high sensitivity to the hyperparameters used during training.
Statement in the paper's abstract describing observed/prior performance issues with standard DRL implementations; implies literature/empirical experience but no specific experiment/sample given in the abstract.
Coal-based energy consumption structure and a secondary-industry-dominated industrial structure significantly inhibit regional TFCP and have strong negative spatial spillovers.
Control-variable coefficients from Spatial Durbin Model on panel data (30 provinces, 2010–2023) showing statistically significant negative direct and indirect effects for coal-dominant energy structure and secondary-industry share.
The occupational groups most vulnerable to AI-driven transformation are office workers, data entry operators, call center workers, accountants, and administrative staff performing routine analytical and administrative tasks.
Results of the envelope-model assessment for the sampled European Union countries that identify occupations with high exposure/vulnerability to AI-driven change; occupations are listed explicitly in the paper.
AI appears to be a diffusing technology, not an emerging occupation.
Synthesis of empirical findings: presence of a shared vocabulary but lack of a coherent practitioner population in resume data, interpreted as diffusion of AI skills/vocabulary across existing roles.
Across heterogeneous learners, a common broadcast curriculum can be slower than personalized instruction by a factor linear in the number of learner types.
Theoretical comparative result in the model (analysis of broadcast vs personalized curricula across heterogeneous learner types; abstract states factor linear in number of types).
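The stated separation can be restated compactly. The notation below is assumed, not the paper's: $k$ is the number of heterogeneous learner types and $T$ a total teaching time.

```latex
% Hedged restatement of the abstract's bound (symbols assumed):
\frac{T_{\text{broadcast}}}{T_{\text{personalized}}} \;=\; \Theta(k)
\quad \text{for } k \text{ heterogeneous learner types.}
```

That is, a single shared curriculum can pay a slowdown that grows linearly in how many distinct learner types it must serve at once.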
The findings provide evidence against cue-based accounts of lie detection more generally.
Authors' interpretation: because lie-detection accuracy did not decrease despite changes to visual cues (retouching, backgrounds, avatars), the results challenge theories that rely on superficial cues for lie detection.
Participants' confidence in their judgments declined in AI-mediated videos, particularly when some participants used avatars while others did not.
Experimental comparisons across conditions with varying levels of AI mediation; subgroup/condition contrast highlighting larger declines in mixed-avatar settings.
Perceived trust in speakers declined in AI-mediated videos.
Experimental results from the two preregistered online experiments comparing perceived trust across varying levels of AI mediation (retouching, background replacement, avatars).
AI-based tools that mediate, enhance or generate parts of video communication may interfere with how people evaluate trustworthiness and credibility.
Motivating claim stated in the paper's introduction/abstract; not an empirical finding but a hypothesis motivating the experiments.
AI adoption faces critical obstacles originating from digital illiteracy, poor Internet access, excessive application costs, and the rural-to-urban divide.
Survey findings and interview themes from the mixed-methods study (survey n=293; interviews n=12) identifying barriers to AI adoption.
Users still had concerns about how AI credit assessments and chatbots operate.
Qualitative interview data (n=12) and/or survey responses (n=293) reporting user concerns about AI credit scoring and chatbots.
Adoption barriers exist, particularly for small and medium-sized enterprises and firms in emerging economies, where capability and data constraints limit impact.
Findings reported from the systematic review and mixed-methods assessment; the abstract references barriers observed across the 104 studies covered by the systematic review.
AI can initially exacerbate distributional injustice.
Dimension-level analysis indicating negative (or initially negative) effects of AI on the distributional component of the energy justice index.
There are few integrated frameworks (bridging ethics and technical controls) in the current AI governance landscape.
Result of the literature review and cluster analysis showing limited coverage of frameworks that integrate ethical principles with auditable technical controls.
Findings reveal a fragmented landscape dominated by ethics/privacy-centric and compliance/risk-focused approaches.
Synthesis of the reviewed literature and results of PCA/k-means clustering indicate thematic dominance of ethics/privacy and compliance/risk orientations across frameworks.
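The PCA-plus-k-means step described in the evidence above can be sketched with NumPy alone. The feature matrix here is synthetic (two invented thematic groups standing in for "ethics/privacy" and "compliance/risk" frameworks); nothing below reproduces the review's actual data or features.

```python
import numpy as np

# Each row encodes a (synthetic) governance framework as a 5-dim thematic
# feature vector; two well-separated groups are planted deliberately.
rng = np.random.default_rng(4)
a = rng.normal(loc=[2, 0, 0, 0, 0], scale=0.3, size=(40, 5))
b = rng.normal(loc=[0, 2, 0, 0, 0], scale=0.3, size=(40, 5))
Xf = np.vstack([a, b])

# PCA via SVD of the centered matrix; keep the first 2 components.
Xc = Xf - Xf.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Plain Lloyd's k-means with k = 2, initialized from random data points.
centers = Z[rng.choice(len(Z), size=2, replace=False)]
for _ in range(20):
    labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    new_centers = []
    for k in range(2):
        pts = Z[labels == k]
        # Keep the old center if a cluster momentarily empties out.
        new_centers.append(pts.mean(axis=0) if len(pts) else centers[k])
    centers = np.array(new_centers)
```

With clearly separated themes, the cluster labels recover the planted grouping; on real framework data the interesting output is which themes dominate each cluster.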
Significant limitations emerged in case law citations, with most cited cases being non-existent or incorrectly referenced.
Authors' review of the case citations produced by the four AI engines for the single transcript, finding many citations were fabricated or misreferenced.
GDP growth is initially negatively affected by the ageing population.
Estimated negative association reported in panel threshold regressions using provincial panel data (31 provinces, 2000–2022); in the primary specification ageing is captured by the paper's main ageing measure, with the old-age dependency ratio tested as an alternative.
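A threshold regression of this kind can be sketched as a Hansen-style grid search: fit separate slopes below and above each candidate threshold and keep the split that minimizes the residual sum of squares. The data, threshold value, and slopes below are all invented for illustration, not taken from the paper.

```python
import numpy as np

# Synthetic data: growth responds to ageing with a slope that switches
# once ageing crosses an unknown threshold gamma (here set to 0.18).
rng = np.random.default_rng(1)
n = 700
ageing = rng.uniform(0.05, 0.30, size=n)
true_gamma = 0.18
growth = (np.where(ageing <= true_gamma, -2.0 * ageing, 1.0 * ageing)
          + rng.normal(scale=0.05, size=n))

def ssr_at(gamma):
    # Regime-specific slopes: ageing enters separately below/above gamma.
    lo = np.where(ageing <= gamma, ageing, 0.0)
    hi = np.where(ageing > gamma, ageing, 0.0)
    X = np.column_stack([np.ones(n), lo, hi])
    beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
    resid = growth - X @ beta
    return resid @ resid

# Grid search over interior quantiles of the threshold variable.
grid = np.quantile(ageing, np.linspace(0.1, 0.9, 81))
gamma_hat = float(min(grid, key=ssr_at))
print(f"estimated threshold: {gamma_hat:.3f}")
```

The SSR drops sharply at the true break, so the grid search pins the threshold down precisely; inference on the threshold itself requires the bootstrap procedures of the threshold-regression literature, which this sketch omits.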
These findings highlight fundamental challenges in numerical and time-series reasoning for current LLMs and motivate future research in financial intelligence.
Interpretation of experimental results in the paper: authors conclude that the observed limited gains (particularly on trading-signal/time-series aspects) indicate shortcomings in LLM numerical and time-series reasoning.
This result directly contradicts classical scaling laws which assume monotonic capability gains with model scale.
Comparative theoretical claim in the paper contrasting the Institutional Scaling Law with classical empirical/theoretical scaling laws in ML literature.
The Institutional Scaling Law proves that institutional fitness is non-monotonic in model scale.
Formal mathematical derivation/proof presented in the paper (the 'Institutional Scaling Law').
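One way the non-monotonicity could be illustrated, with an assumed functional form that is not the paper's actual derivation: capability gains that grow logarithmically in scale, set against institutional integration costs that grow linearly, give fitness an interior peak.

```latex
% Assumed illustrative form, not the paper's proof: fitness F at model
% scale s combines log capability gains with linear institutional cost.
F(s) \;=\; a \log s \;-\; b\,s, \qquad a, b > 0,
\qquad F'(s^{*}) = 0 \;\Rightarrow\; s^{*} = \frac{a}{b}.
```

Under any such form, $F$ rises up to $s^{*}$ and declines beyond it, so larger models need not be institutionally fitter.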
AI development proceeds not through smooth advancement but through extended periods of stasis interrupted by rapid phase transitions that reorganize the competitive landscape (punctuated equilibrium pattern).
Argument based on punctuated equilibrium theory from evolutionary biology and historical analysis presented in the paper identifying discrete transitions in AI history; the paper cites and classifies eras/events as evidence.
The interaction of artificial intelligence and environmental regulation produces a '1 + 1 < 2' crowding-out effect (their combined effect is less than the sum of individual effects).
Spatial Durbin model with interaction term between AI and environmental regulation as summarized in the abstract; reported as a crowding-out interaction.
Environmental regulation significantly inhibits local UCEE.
Spatial Durbin model results reported in the abstract indicating a significant negative local coefficient for environmental regulation.
Artificial intelligence significantly inhibits local UCEE.
Spatial Durbin model results reported in the abstract indicating a significant negative local coefficient for artificial intelligence.
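The "1 + 1 < 2" crowding-out interaction in the claims above can be sketched with a plain interaction regression. This is an ordinary OLS illustration on synthetic data, not the paper's spatial Durbin specification; coefficient values and variable names are invented.

```python
import numpy as np

# Synthetic panel-like cross-section: AI and environmental regulation (ER)
# each lower UCEE, but a positive interaction term makes the joint effect
# smaller in magnitude than the sum of the individual effects.
rng = np.random.default_rng(2)
n = 500
ai = rng.normal(size=n)
er = rng.normal(size=n)
ucee = -0.4 * ai - 0.3 * er + 0.2 * ai * er + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), ai, er, ai * er])
beta, *_ = np.linalg.lstsq(X, ucee, rcond=None)
b_ai, b_er, b_int = beta[1], beta[2], beta[3]

# Crowding-out reading: both main effects negative, interaction positive,
# so the combined marginal effect is pulled back toward zero.
joint = b_ai + b_er + b_int
print(f"AI: {b_ai:.2f}, ER: {b_er:.2f}, interaction: {b_int:.2f}")
print(f"sum of individual effects: {b_ai + b_er:.2f}, joint effect: {joint:.2f}")
```

The sign pattern (negative mains, positive interaction) is exactly what a "1 + 1 < 2" result reports; the spatial model adds spatially lagged terms on top of this core structure.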
Rather than broad job losses, evidence points to a reallocation at the entry level: AI automates tasks typically assigned to junior staff, shifting the nature of entry-level roles.
Synthesis of firm- and task-level empirical studies reported in the brief documenting automation of routine/junior tasks and changes in job-task composition; specific sample sizes vary by cited study and are not provided in the brief.
Algorithmic credit systems are linked to higher levels of financial stress.
Study reports a positive association between algorithmic credit system use and reported financial stress from regression analysis on the 400-user cross-sectional dataset.
It is impractical to uniformly apply an alignment method across diverse, independently developed AI models in strategic settings.
Paper assertion / motivating argument (stated as motivation for investigating zero-shot Nash-like behavior); not presented as an empirical finding within the paper.
The gap between informal natural language requirements and precise program behavior (the 'intent gap') has always plagued software engineering, but AI-generated code amplifies it to an unprecedented scale.
Conceptual claim and argumentation in the paper; presented as an observed escalation in the scale of the existing 'intent gap' due to AI code generation. No quantitative evidence or sample size given in the excerpt.
The establishment of the China–ASEAN Free Trade Area (CAFTA) reduced regional trade policy uncertainty.
Empirical analysis treats CAFTA as an exogenous policy shock and measures a decline in regional trade policy uncertainty using firm- and trade-level data from the China Industrial Enterprise Database and China Customs Database covering 2000–2014; identification via difference-in-differences (DID). (Sample sizes not specified in provided summary.)
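The DID identification described above can be sketched on toy data. Everything below is invented firm-level data, not the China Customs sample; the treatment effect of -0.3 is an assumption for illustration.

```python
import numpy as np

# Toy 2x2 DID: "treated" firms exposed to CAFTA, "post" the policy period.
# The DID estimand is the coefficient on treated * post.
rng = np.random.default_rng(3)
n = 1000
treated = rng.integers(0, 2, size=n)
post = rng.integers(0, 2, size=n)
uncertainty = (1.0 - 0.3 * treated * post + 0.1 * treated + 0.05 * post
               + rng.normal(scale=0.2, size=n))

X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, uncertainty, rcond=None)
did = beta[3]
print(f"DID estimate: {did:.2f}")

# Sanity check: in the saturated 2x2 case, the regression coefficient
# equals the "difference of differences" of the four cell means.
means = {(t, p): uncertainty[(treated == t) & (post == p)].mean()
         for t in (0, 1) for p in (0, 1)}
did_means = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
```

The regression form generalizes directly to fixed effects and controls, which is how firm-level DID designs like the one summarized above are typically estimated.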
Some declines (in self-efficacy and meaningfulness) from passive AI use persist after participants return to manual work.
Within-experiment assessment of outcomes after participants returned to manual (no-AI) tasks following the AI-use manipulation in the pre-registered experiment (N = 269); reported persistent reductions in self-efficacy and meaningfulness for the passive condition.
Passive use of AI reduces perceived meaningfulness of work.
Pre-registered experiment (N = 269) with self-reported measure of work meaningfulness; passive-copy condition showed lower meaningfulness ratings than No-AI and Active-collaboration conditions.
Passive use of AI reduces psychological ownership of the produced outputs.
Same pre-registered experiment (N = 269). Participants in the passive-copy AI condition reported lower psychological ownership of their outputs (self-report scales) relative to No-AI and Active-collaboration conditions.
Passive use of AI (copying AI-generated output) reduces workers' self-efficacy.
Pre-registered between-subjects experiment (N = 269) using occupation-specific writing tasks. Participants assigned to a passive-copy AI condition reported lower self-efficacy (self-reported confidence to complete tasks without AI) compared to the No-AI (manual) and Active-collaboration conditions.
Securitization of economic dependencies—especially in strategic sectors (semiconductors, telecoms, cloud)—frames partner states as security risks and exposes them to blacklists, de-risking campaigns, and sudden loss of market access.
Process tracing of export controls and blacklisting episodes; chronologies of sanction/policy actions affecting firms and partners; policy documents and public lists (e.g., export-control lists). (Data sources: export-control lists, sanction policy documents, corporate/access denials; sample sizes not specified.)
Stronger empirical evidence is needed on how hazard, exposure, and vulnerability interact across space and time to shape aggregated multi-risks.
Evaluation of project activities and case studies identifying gaps in empirical spatio-temporal analyses of interacting risk components; synthesis recommends targeted empirical work.