Evidence (1286 claims)
Adoption — 5126 claims
Productivity — 4409 claims
Governance — 4049 claims
Human-AI Collaboration — 2954 claims
Labor Markets — 2432 claims
Org Design — 2273 claims
Innovation — 2215 claims
Skills & Training — 1902 claims
Inequality — 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Filter: Inequality
Aggregate productivity (output per worker or per unit of inputs) can rise while labor’s share and employment decline due to substitution toward technological capital (K_T).
Macro growth-accounting exercises decomposing output growth into contributions from labor, traditional capital, and technological capital; model simulations showing productivity gains coexisting with falling labor shares under substitution elasticities.
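The mechanism can be illustrated with a standard CES growth-accounting sketch (illustrative only; the paper's exact specification is not given in the excerpt): when technological capital K_T substitutes for labor with elasticity σ > 1, output per worker can rise while the labor share falls.

```latex
% Illustrative CES production function with technological capital K_T:
Y = A\Bigl[\theta\,K_T^{\frac{\sigma-1}{\sigma}} + (1-\theta)\,L^{\frac{\sigma-1}{\sigma}}\Bigr]^{\frac{\sigma}{\sigma-1}},
\qquad
s_L = \frac{wL}{Y} = (1-\theta)\,A^{\frac{\sigma-1}{\sigma}}\left(\frac{Y}{L}\right)^{-\frac{\sigma-1}{\sigma}}.
```

With σ > 1 the exponent on Y/L is negative, so accumulation of K_T that raises output per worker mechanically lowers the labor share s_L, matching the coexistence of productivity gains and falling labor shares described above.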
The data show that fewer than 0.7% of the Indian population uses AI-based ride services.
Empirical statistic reported in the paper (declared as data) quantifying the share of the population using AI-based ride services.
The lack of a significant worsening in transportation-sector inequality can be attributed to sluggish demand switching from non-AI to AI-based services in India.
Argument in the paper linking empirical finding (no significant increase in inequality) to low observed adoption rates of AI-based ride services; supported by reported adoption statistic.
This inefficiency directly undermines UN Sustainable Development Goals 13 (Climate Action) and 10 (Reduced Inequalities) by hindering equitable AI access in resource-constrained regions.
Normative/analytic claim in the paper linking energy inefficiency to negative impacts on specific UN SDGs (argumentative, not empirically quantified in the abstract).
Current paradigms indiscriminately apply computation-intensive strategies such as Chain-of-Thought (CoT) to billions of daily queries, causing LLM overthinking that inflates carbon emissions and raises operational barriers.
Claim/assertion in the paper framing the problem (conceptual/observational argument; no specific empirical backing provided in the abstract).
There is a potential for exclusion due to limited digital footprints, which can limit who benefits from AI-driven finance.
Abstract explicitly identifies potential exclusion of people with limited digital footprints as a challenge, based on qualitative interviews and case-study evidence.
Data privacy concerns are a notable challenge in deploying AI-driven financial solutions.
Abstract lists data privacy concerns among identified challenges drawn from interviews and analysis across the three case studies.
Infrastructure limitations pose a barrier to adoption and effective use of AI-enabled financial services.
Abstract identifies infrastructure limitations as a challenge, based on qualitative interviews and case-study evidence.
Digital literacy gaps are a challenge limiting the effectiveness and inclusion of AI-driven financial solutions.
Abstract lists digital literacy gaps among identified challenges, based on qualitative insights from the 1,500 interviews and case-study observations.
Policymakers in the EU and beyond will need to change course, and soon, if they are to effectively govern the next generation of AI technology.
Authors' prescriptive conclusion based on their analysis of shortcomings in the EU AI Act and institutional frameworks (policy recommendation; no empirical sample size in excerpt).
The Act's allocation of monitoring and enforcement responsibilities, reliance on industry self-regulation, and level of government resourcing illustrate how a regulatory framework designed for conventional AI systems can be ill-suited to AI agents.
Authors' institutional analysis of the EU AI Act's monitoring/enforcement allocation, reliance on self-regulation, and resourcing (qualitative legal/institutional analysis; no quantitative sample size in excerpt).
The EU AI Act faces significant obstacles in confronting governance challenges arising from AI agents, such as unequal access to the economic opportunities afforded by AI agents.
Authors' argument that the Act may not prevent or address unequal access to benefits of AI agents (policy/legal analysis; no empirical sample size in excerpt).
The EU AI Act faces significant obstacles in confronting governance challenges arising from AI agents, such as the risk of misuse of agents by malicious actors.
Authors' analysis highlighting misuse risks and the Act's limitations in addressing them (policy/legal analysis; no empirical sample size in excerpt).
The EU AI Act faces significant obstacles in confronting governance challenges arising from AI agents, such as performance failures in autonomous task execution.
Authors' analytical argument that the Act's design and provisions do not adequately address autonomous performance failures (policy/legal analysis; no empirical sample size provided in excerpt).
The EU AI Act was promulgated prior to the development and widespread use of AI agents.
Factual/timing claim by the authors referencing the Act's adoption date relative to development and proliferation of AI agents (historical/policy analysis; dates verifiable externally).
AI agents present particularly pressing questions for the European Union's AI Act.
Authors' normative/analytical claim based on the perceived fit between AI agents' characteristics and the EU AI Act's design (policy/legal analysis; no empirical sample size in excerpt).
AI can lead enterprises to adopt different income-distribution modes by raising the marginal output of capital and substituting for low-skilled labor (technology bias).
Theoretical mechanism articulated in the paper based on capital-labor substitution principle and factor reward theory; implied empirical testing using firm-level data.
AI-enabled, democratised production is more likely to intensify competition and produce winner-take-most outcomes than to generate broadly distributed entrepreneurial success.
Synthesised theoretical prediction based on the unified framework (attention scarcity + free-entry dilution + superstar/preferential attachment dynamics) developed in the paper; no empirical validation provided.
When the framework is extended to include quality heterogeneity and reinforcement dynamics, equilibrium outcomes exhibit declining average payoffs.
Analytical extension of the baseline formal model to incorporate heterogeneous quality and reinforcement (preferential attachment) dynamics; theoretical derivation in the paper; no empirical sample.
In markets with near-zero marginal costs and free entry, increases in the number of producers dilute average attention and returns per producer.
Formal theoretical model introduced in the paper (Builder Saturation Effect) that assumes near-zero marginal costs, free entry, and finite human attention; no empirical sample or experimental data reported.
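A minimal numeric sketch of this dilution logic (hypothetical parameters and function names; not the paper's formal model): with a fixed pool of attention and near-zero marginal cost, per-producer returns fall as 1/N, and free entry continues until the per-producer return just covers the entry cost.

```python
# Sketch of the attention-dilution ("Builder Saturation") logic: total
# audience attention is fixed, N producers split it evenly, so revenue per
# producer falls as 1/N. Free entry continues while an additional entrant
# would still cover the entry cost. All parameters are illustrative.

def equilibrium_producers(total_attention, price_per_unit, entry_cost):
    """Largest N at which per-producer revenue still covers entry cost."""
    n = 1
    while total_attention * price_per_unit / (n + 1) >= entry_cost:
        n += 1
    return n

# Example: 1e6 attention units, $0.01 revenue per unit, $500 entry cost
n_star = equilibrium_producers(1_000_000, 0.01, 500.0)
avg_revenue = 1_000_000 * 0.01 / n_star  # dissipates to the entry cost
```

The equilibrium pins average returns to the entry cost regardless of how large the total market is, which is the "dilution of average attention and returns" the claim describes.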
Current (pay-upfront) models impose a financial barrier to entry for developers, limiting innovation and excluding actors from emerging economies.
Analytical argument in the paper based on cost-structure reasoning and literature on barriers to entry; no empirical sample or causal estimate provided.
AI adoption faces critical obstacles originating from digital illiteracy, poor Internet access, excessive application costs, and the rural-to-urban divide.
Survey findings and interview themes from the mixed-methods study (survey n=293; interviews n=12) identifying barriers to AI adoption.
Users still had concerns about how AI credit assessments and chatbots operate.
Qualitative interview data (n=12) and/or survey responses (n=293) reporting user concerns about AI credit scoring and chatbots.
AI can initially exacerbate distributional injustice.
Dimension-level analysis indicating negative (or initially negative) effects of AI on the distributional component of the energy justice index.
Rather than broad job losses, evidence points to a reallocation at the entry level: AI automates tasks typically assigned to junior staff, shifting the nature of entry-level roles.
Synthesis of firm- and task-level empirical studies reported in the brief documenting automation of routine/junior tasks and changes in job-task composition; specific sample sizes vary by cited study and are not provided in the brief.
Algorithmic credit systems are linked to higher levels of financial stress.
Study reports a positive association between algorithmic credit system use and reported financial stress from regression analysis on the 400-user cross-sectional dataset.
In Chicago, the model shows moderate under-detection of Black residents, with a DIR (disparate impact ratio) equal to 0.22.
Reported DIR value from simulation results on Chicago 2022 data.
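For readers unfamiliar with the metric, a common definition of the disparate impact ratio (assumed here; the paper's exact formula is not in the excerpt) is the detection rate for the group of interest divided by the rate for the reference group, with values well below 1 indicating under-detection. The numbers below are illustrative, not the paper's data.

```python
# Disparate impact ratio (DIR) under one common definition: the
# detection/selection rate for the group of interest divided by the
# rate for the reference group. Values near 1 indicate parity.

def disparate_impact_ratio(detected_group, total_group, detected_ref, total_ref):
    return (detected_group / total_group) / (detected_ref / total_ref)

# Illustrative: 11 of 100 detected in one group vs 50 of 100 in the reference
dir_value = disparate_impact_ratio(11, 100, 50, 100)  # -> 0.22
```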
These dynamics amplify initial disparities and produce persistent performance gaps across the population.
Main theoretical conclusion of the paper: analysis of the proposed dynamical system showing amplification and persistence of gaps (authors' demonstrated result).
Securitization of economic dependencies—especially in strategic sectors (semiconductors, telecoms, cloud)—frames partner states as security risks and exposes them to blacklists, de-risking campaigns, and sudden loss of market access.
Process tracing of export controls and blacklisting episodes; chronologies of sanction/policy actions affecting firms and partners; policy documents and public lists (e.g., export-control lists). (Data sources: export-control lists, sanction policy documents, corporate/access denials; sample sizes not specified.)
Large-scale AI models have significant energy and resource costs, creating a notable environmental footprint that must be addressed.
Narrative integration of prior empirical studies measuring compute, energy consumption, and embodied emissions of large models (cited literature); the review does not present new quantitative measurements itself.
As AI is deployed in safety-critical domains, reliability, regulation, and human-oriented system design become essential to avoid harms.
Review of literature on safety-critical systems, human–machine interaction studies, and regulatory policy discussions; the paper reports this as a consensus implication rather than presenting new empirical tests.
Problem C is the practical difficulty of attributing responsibility and agency across distributed socio-technical systems (robots, algorithms, institutions, humans).
Conceptual diagnosis developed in the paper and exemplified with vignettes from three application domains; defined as an analytic concept rather than empirically measured.
Provider incentives may be misaligned (e.g., optimizing for engagement or test performance instead of durable learning), requiring contracts, regulation, or purchaser design to align incentives.
Consensus from interdisciplinary workshop (50 scholars) highlighting incentive risks and market-design considerations; descriptive, not empirical.
Extensive learner data needed to personalize AI feedback raises privacy and data-governance concerns (consent, storage, usage).
Qualitative consensus from workshop participants (50 scholars) noting data-collection requirements and governance risks; no empirical governance studies included.
Automated feedback may not capture pedagogical nuances expert teachers use (motivation, socio-emotional cues, complex reasoning), limiting pedagogical fit.
Expert syntheses from the workshop of 50 scholars highlighting limits of automation relative to expert teacher judgment; no empirical comparisons presented.
AI-generated feedback can be incorrect, misleading, or misaligned with learning objectives; assessing feedback quality is nontrivial.
Repeated concern raised across workshop participants (50 scholars) in qualitative synthesis; noted as a substantive risk and open challenge rather than empirically quantified here.
Proactive AI at national scale amplifies concerns around transparency, accountability, privacy, and potential misuse, necessitating robust regulatory and ethical frameworks.
Normative and ethical analysis in the paper, supported by general literature on large-scale AI governance; no empirical assessment of regulatory effectiveness in Russia included.
The article identifies and lays out several concerns regarding the government's approach to regulating AI.
Analytical critique presented in the paper (legal/policy analysis summarizing potential regulatory shortcomings). Based on the author's review and argumentation rather than primary empirical data.
Entrenched societal inequities mean that women and girls are often disproportionately held back from achieving their potential.
Broad claim referencing societal inequities and their effects on women and girls; stated in the introduction without specific empirical citations in the excerpt.
The environmental footprint of healthcare systems is growing and persistent inequities in access and outcomes have intensified calls for procurement reform.
Contemporary literature review and synthesis of sector reports and studies documenting healthcare emissions/footprint and health inequities (no original empirical data reported in this paper).
Ongoing issues remain such as data access, model transparency, ethical concerns, and the varying relevance across Global North and Global South contexts.
Critical synthesis within the review drawing on discussions and critiques in the literature about barriers and ethical challenges; based on reported limitations and regional comparisons in reviewed studies (no numerical breakdown provided).
Ireland exhibits the largest gender gap in advanced digital task use: approximately 44% of men versus 18% of women perform advanced digital tasks — a 26 percentage point gap, close to double the European average.
Country-level descriptive statistics from ESJS for Ireland reporting shares of men and women performing advanced digital tasks. (Exact Irish sample size not provided in the excerpt.)
Across Europe, women are around 15 percentage points less likely than men to perform advanced digital tasks in their jobs.
Empirical analysis of the European Skills and Jobs Survey (ESJS) (Cedefop, 2021) using regression-based estimates and descriptive statistics across European countries. (Exact sample size and country count not provided in the excerpt.)
AI substitutes many routine tasks, including both manual and cognitive/rule-based activities, disproportionately affecting middle-skill occupations.
Task-based substitution reasoning within SBTC framework and cross-sectoral task analysis. The paper provides conceptual synthesis rather than presenting new microdata or quantified task-level estimates.
Nearby business closures increased perceived impediments to growth, amplifying pessimism via local exposure (social contagion effect).
Empirical comparison of perceived impediments to growth across variation in local exposure to nearby business closures (survey measures of local closures correlated with respondents' perceived impediments), using the cross-country survey sample.
Two regimes emerge; the inequality-decreasing regime arises when AI behaves like a broadly available commodity technology or when labor-market institutions share rents widely (high ξ).
Model regime characterization and calibrated counterfactuals showing falling wage dispersion and ΔGini under commodity-like AI assumptions or higher rent-sharing elasticity.
Generative AI compresses within-task skill differences (reduces dispersion of individual task performance).
Theoretical task-based model and calibrated quantitative simulations (Method of Simulated Moments matching six empirical moments) showing reductions in within-task performance dispersion after introducing AI technology.
Automated compliance and credentialing systems raise governance issues (auditability, appeals mechanisms) and risk incorrect automated deregistration if not properly governed.
Governance and algorithmic-risk discussion in the paper; logical argumentation rather than case-based evidence.
The paper models career progression as a continuous function and treats certification gaps as discontinuities that impede labour-market mobility.
Mathematical/conceptual modeling described in the methods (career-progression-as-continuous-function approach); this is a modeling choice reported in the paper rather than an empirical finding.
There is limited long-term impact evidence and few system-level assessments of AI in developing-country agriculture.
Authors' methodological caveat based on the temporal scope and types of studies available in the >60-study review.