Evidence (8066 claims)
Claims by category: Adoption (5586), Productivity (4857), Governance (4381), Human-AI Collaboration (3417), Labor Markets (2685), Innovation (2581), Org Design (2499), Skills & Training (2031), Inequality (1382).
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 417 | 113 | 67 | 480 | 1091 |
| Governance & Regulation | 419 | 202 | 124 | 64 | 823 |
| Research Productivity | 261 | 100 | 34 | 303 | 703 |
| Organizational Efficiency | 406 | 96 | 71 | 40 | 616 |
| Technology Adoption Rate | 323 | 128 | 74 | 38 | 568 |
| Firm Productivity | 307 | 38 | 70 | 12 | 432 |
| Output Quality | 260 | 71 | 27 | 29 | 387 |
| AI Safety & Ethics | 118 | 179 | 45 | 24 | 368 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 75 | 37 | 19 | 312 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 74 | 34 | 78 | 9 | 197 |
| Skill Acquisition | 98 | 36 | 40 | 9 | 183 |
| Innovation Output | 121 | 12 | 24 | 13 | 171 |
| Firm Revenue | 98 | 35 | 24 | — | 157 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 87 | 16 | 34 | 7 | 144 |
| Inequality Measures | 25 | 76 | 32 | 5 | 138 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 89 | 7 | 4 | 3 | 103 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 33 | 11 | 7 | 98 |
| Wages & Compensation | 54 | 15 | 20 | 5 | 94 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 27 | 26 | 10 | 6 | 72 |
| Job Displacement | 6 | 39 | 13 | — | 58 |
| Hiring & Recruitment | 40 | 4 | 6 | 3 | 53 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 11 | 6 | 2 | 41 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 6 | 9 | — | 27 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Perceived autonomy amplifies the negative effects of perceived algorithmic behavioral constraint on riders' outcomes (i.e., strengthens the adverse impact on mental health and risky riding via work pressure).
Moderation results from SEM and bootstrapping on a sample of 466 Chinese food delivery riders, showing that the interaction between behavioral constraint and perceived autonomy increases the negative indirect effects through work pressure.
Perceived autonomy enhances the positive effect of perceived algorithmic standardized guidance in reducing risky riding behavior.
SEM moderation analysis with bootstrapping on data from 466 Chinese food delivery riders showing perceived autonomy strengthens the standardized guidance -> work pressure -> risky riding indirect pathway.
Perceived algorithmic standardized guidance reduces risky riding behavior among food delivery riders by reducing work pressure.
Survey of 466 Chinese food delivery riders analyzed with SEM and bootstrapping showing standardized guidance -> work pressure -> risky riding behavior (indirect effect).
Perceived algorithmic behavioral constraint impairs food delivery riders' mental health through increased work pressure.
Survey of 466 Chinese food delivery riders analyzed via SEM and bootstrapping with work pressure as mediator (behavioral constraint -> work pressure -> mental health).
Perceived algorithmic tracking evaluation impairs food delivery riders' mental health through increased work pressure.
Survey data from 466 Chinese food delivery riders analyzed with structural equation modeling (SEM) and bootstrapping; work pressure modeled as mediator based on the Job Demands-Resources (JD-R) framework; indirect effect from tracking evaluation -> work pressure -> mental health reported.
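The SEM studies above all rest on the same technique: a bootstrapped indirect effect through a mediator (work pressure). A minimal sketch of a percentile-bootstrap mediation estimate — not the authors' code; the toy data, coefficients, and seeds are invented for illustration:

```python
import random
import statistics

def ols_slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

def bootstrap_indirect(x, m, y, reps=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in x -> m -> y."""
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        a = ols_slope(xs, ms)  # predictor -> mediator path
        b = ols_slope(ms, ys)  # mediator -> outcome path (simple model, no direct path)
        estimates.append(a * b)
    estimates.sort()
    return estimates[int(0.025 * reps)], estimates[int(0.975 * reps)]

# Toy data: tracking evaluation -> work pressure -> (worse) mental health, n = 466
rng = random.Random(1)
track = [rng.gauss(0, 1) for _ in range(466)]
pressure = [0.5 * t + rng.gauss(0, 1) for t in track]
health = [-0.6 * p + rng.gauss(0, 1) for p in pressure]

lo, hi = bootstrap_indirect(track, pressure, health)
print(f"95% bootstrap CI for indirect effect: [{lo:.2f}, {hi:.2f}]")
```

A CI that excludes zero is the usual criterion for a significant indirect effect in this literature.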
Global AI governance, regulatory fragmentation, and the effects of privacy laws on market competition are under-studied areas.
Low topic prevalence for topics corresponding to global governance, regulatory fragmentation, and privacy-law effects on competition in the >4,600-paper corpus as identified by topic modeling and policy-alignment analysis.
The economic impacts of risk-based AI regulations are under-studied in the current literature.
Topic-modeling indicates few papers focusing on economic impacts of risk-based regulation; authors' crosswalk with policy documents shows this as a gap.
Research on effective industrial policy for AI is relatively underexplored.
Low prevalence of industrial-policy-related topics in the topic-modeling output and comparison to stated policy priorities in national AI strategies and legislation across regions.
There are notable gaps in the literature in measuring AI-driven economic growth.
Comparison of topic prevalence from the topic-modeling exercise with policy priorities derived from national AI strategies and legislation across regions, showing low coverage of research explicitly measuring AI-driven economic growth.
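The gap-identification logic behind these claims — compare topic prevalence in the corpus against a list of policy priorities and flag priorities with low coverage — can be sketched as follows. This is a crude keyword-matching stand-in for the paper's topic model; all topic names, keywords, and abstracts are invented:

```python
from collections import Counter

# Hypothetical policy priorities and keyword proxies for topics (illustrative only)
PRIORITY_TOPICS = {"economic growth measurement", "risk-based regulation", "industrial policy"}
TOPIC_KEYWORDS = {
    "economic growth measurement": ["gdp", "growth accounting"],
    "risk-based regulation": ["risk-based", "eu ai act"],
    "industrial policy": ["industrial policy", "subsidy"],
    "labor markets": ["employment", "wages"],
}

def topic_prevalence(abstracts):
    """Share of abstracts matching each topic's keyword list."""
    counts = Counter()
    for text in abstracts:
        low = text.lower()
        for topic, kws in TOPIC_KEYWORDS.items():
            if any(kw in low for kw in kws):
                counts[topic] += 1
    return {t: counts[t] / len(abstracts) for t in TOPIC_KEYWORDS}

def flag_gaps(prevalence, threshold=0.05):
    """Priority topics whose literature coverage falls below the threshold."""
    return sorted(t for t in PRIORITY_TOPICS if prevalence.get(t, 0.0) < threshold)

abstracts = [
    "Effects of AI on employment and wages in manufacturing",
    "Wages under algorithmic management",
    "A survey of AI adoption and employment outcomes",
]
gaps = flag_gaps(topic_prevalence(abstracts))
print(gaps)
```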
Short-run labor market disruptions raise concerns regarding wage inequality and workforce adaptation.
Claims based on observed short-run labor market adjustments in publicly available data and theoretical implications for inequality and adaptation; specific empirical measures, time horizons, and sample sizes are not reported in the excerpt.
AI simultaneously increases adjustment pressures for routine tasks.
Argument and cited observations from publicly available labor market data indicating displacement or adjustment in routine-task-intensive occupations (no specific empirical estimates or samples provided).
The Cautious are held in organizational stasis: without early adopter examples they don't enter the virtuous adoption cycle, never accumulate the usage frequency that drives intent, and never attain high efficacy.
Comparative analysis of archetype subgroups in the survey (N=147) showing the 'Cautious' group has lower reported usage frequency, lower intent to increase usage, and lower self-reported efficacy relative to 'Enthusiasts' and 'Pragmatists'.
Adoption of AI testing tools lags that of coding tools, creating a 'Testing Gap'.
Within-sample comparison of reported adoption rates for coding-oriented AI tools versus testing-oriented AI tools among 147 developers, showing lower adoption for testing tools.
Security concerns remain a moderate and statistically significant barrier to adoption.
Survey-derived security-concern metric (N=147) that shows a statistically significant negative association with future adoption intention (reported as moderate in effect size).
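The "moderate and statistically significant negative association" reported for this survey is the kind of result a simple Pearson correlation with a t-test would produce. A stdlib sketch on invented data mimicking the N=147 sample (not the study's data or code):

```python
import math
import random

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def t_statistic(r, n):
    """t for H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# Toy data mimicking N=147 with a moderate negative association
rng = random.Random(42)
concern = [rng.gauss(0, 1) for _ in range(147)]
intention = [-0.3 * c + rng.gauss(0, 1) for c in concern]

r = pearson_r(concern, intention)
t = t_statistic(r, len(concern))
print(f"r = {r:.2f}, t = {t:.2f}")
```

With df = 145, |t| above roughly 1.98 would clear the conventional 5% threshold.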
Petroleum imports have a large and negative impact on Indonesia's economic growth.
Macroeconomic analysis within the study (regression/statistical assessment of drivers of economic growth) identifying petroleum imports as a substantial negative contributor to growth.
Current national and regional approaches to AI governance are often fragmented, focusing narrowly on industrial competition, piecemeal regulation, or abstract ethical principles.
Asserted in the abstract; a review/comparison of existing policies is implied, but the abstract does not detail methods or samples beyond a later comparative analysis.
AI deepens inequality.
Asserted in abstract; the abstract does not state empirical methods or data backing this claim.
AI's current trajectory exacerbates labor market polarization.
Asserted in abstract; no study design or empirical sample specified in the abstract.
When ERM is implemented merely as a formal compliance mechanism, firms do not realize the same benefits as when ERM is embedded in culture and daily decision-making.
Synthesis from reviewed empirical and conceptual studies indicating differences in outcomes depending on the nature of ERM implementation; underlying studies appear to include comparative observations but are not detailed in the summary.
Traditional silo-based risk management approaches are inadequate for MSMEs in increasingly volatile and uncertain business environments.
Conceptual arguments and literature reviewed in the article contrasting silo-based approaches with integrated ERM frameworks; based on theoretical and empirical critiques in the reviewed literature.
Traditional human resource management (HRM) approaches in hospitals rely on manual processes that are prone to errors, lack adaptability, and fail to adequately balance staff preferences with patient care requirements.
Background/positioning statement in the paper; asserted based on literature and authors' motivation for proposing an AI-driven framework (no specific dataset or quantitative analysis provided for this claim).
There are concerns that AI may undermine the right to privacy in India.
Legal and policy analysis in the paper discussing privacy risks associated with AI and data-driven governance (review of privacy frameworks and potential conflicts). No empirical sample size; based on normative/legal analysis.
There are concerns that AI has the potential to further increase economic inequality in India.
The paper raises this as a policy/legal concern using theoretical and analytical argumentation (literature/policy review); no primary empirical study or sample size reported in the summary.
AI adoption increases psychosocial pressure on workers.
Themes surfaced via content analysis of recent peer-reviewed literature on AI and workforce wellbeing within the qualitative library research (specific studies not listed).
AI adoption contributes to inequality (uneven distribution of benefits and opportunities).
Synthesis of arguments and empirical findings from accredited journals included in the literature-based study (sources not enumerated).
AI leads to skill mismatch between workers and emerging job requirements.
Identified through thematic analysis of recent literature on workforce dynamics and skills in the qualitative review (specific article count not reported).
AI causes job displacement.
Recurring finding across reviewed accredited journal articles summarized via thematic content analysis in the library research (no quantitative sample provided).
Simulations project measurable reductions in defect rates under AI-HRM scenarios.
Regression-based simulations of the counterfactual model include defect reduction as an organizational outcome and project decreases in defect rates when HR processes are AI-supported.
Simulations show notable reductions in absenteeism under the AI-HRM scenario.
Predictive estimation and regression-based simulations projecting absenteeism rates under counterfactual AI-supported HR processes using the industrial firm dataset.
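Both AI-HRM simulation claims describe regression-based counterfactual projection: fit outcomes on an AI-support measure, then evaluate the fitted line under a hypothetical full-adoption scenario. A minimal sketch — the data points and the linear form are invented assumptions, not the study's model:

```python
def fit_line(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Toy observations: (AI support level in [0,1], absenteeism rate %)
obs = [(0.0, 8.2), (0.2, 7.9), (0.4, 7.1), (0.6, 6.6), (0.8, 6.0), (1.0, 5.7)]
a, b = fit_line([o[0] for o in obs], [o[1] for o in obs])

# Counterfactual: absenteeism under fully AI-supported HR vs. no AI support
baseline = a + b * 0.0
counterfactual = a + b * 1.0
print(f"projected change: {counterfactual - baseline:.2f} points")
```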
Employers that recognize their own size in the labor market may act strategically when hiring and setting wages, generating misallocation and harming workers.
Theoretical argument made by the authors; no micro-econometric estimates, experiments, or sample descriptions are provided in the excerpt to substantiate degree or prevalence of strategic behavior.
This micro approach is at odds with the reality of labor markets in which monopsony potentially matters most.
Interpretive claim by the authors contrasting model assumptions with observed market structure; no empirical data, sample size, or specific markets cited in the excerpt.
The helicoid failure regime was observed across diverse high-consequence domains: clinical diagnosis, investment evaluation, and high-consequence interviews.
Paper reports testing in three domain types during the prospective case series that found the helicoid pattern; evidence consists of domain-specific interaction transcripts and evaluations in the paper.
Under high stakes, when being rigorous and being comfortable diverge, these systems tend toward comfort, becoming less reliable precisely when reliability matters most.
Conclusion drawn from the case series across high-stakes scenarios (clinical, investment, interviews); evidence consists of observed behaviors and failure patterns in the tested interactions.
The helicoid pattern occurred in all seven systems tested, despite explicit protocols designed to sustain rigorous partnership.
Reported outcome of the prospective case series: 7/7 systems exhibited the described pattern; protocols to enforce rigor were applied during testing (details presumably in paper).
A prospective case series documents helicoid dynamics across seven leading systems (Claude, ChatGPT, Gemini, Grok, DeepSeek, Perplexity, Llama families).
Prospective case series described in the paper involving seven named LLM systems; sample size = 7 systems; domains tested include clinical diagnosis, investment evaluation, and high-consequence interviews.
LLMs perform differently when checking is impossible, such as in high-uncertainty, irreversible decisions (clinical treatment on incomplete data; investment under fundamental uncertainty).
Paper asserts this contrast and motivates the study; supporting evidence comes from the reported prospective case series across difficult decision domains (see below).
The number of granted AI-related patents is negatively associated with GDP growth in the model.
Panel econometric analysis using OLS, Fixed Effects, Difference GMM and System GMM estimators; AI innovation proxied by the number of granted AI-related patents; reported negative association across the applied estimators (sample of countries and time span not specified in the provided summary).
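The fixed-effects estimator in that panel analysis removes time-invariant country heterogeneity via the within transformation (demeaning per country), then runs pooled OLS on the demeaned data. A stdlib sketch with an invented toy panel — GMM estimators are not shown:

```python
from collections import defaultdict

def within_transform(panel):
    """Demean x and y within each country (the FE within transformation)."""
    by_country = defaultdict(list)
    for country, x, y in panel:
        by_country[country].append((x, y))
    demeaned = []
    for rows in by_country.values():
        mx = sum(r[0] for r in rows) / len(rows)
        my = sum(r[1] for r in rows) / len(rows)
        demeaned.extend((x - mx, y - my) for x, y in rows)
    return demeaned

def fe_slope(panel):
    """Pooled OLS on demeaned data = fixed-effects estimate of the patents coefficient."""
    d = within_transform(panel)
    return sum(x * y for x, y in d) / sum(x * x for x, y in d)

# Toy panel: (country, AI patents granted, GDP growth %) with country-level heterogeneity
panel = [
    ("A", 10, 3.0), ("A", 20, 2.6), ("A", 30, 2.1),
    ("B", 5, 1.5),  ("B", 15, 1.1), ("B", 25, 0.6),
]
print(f"FE slope: {fe_slope(panel):.3f}")
```

The within transformation is why a negative FE coefficient reflects within-country covariation over time rather than cross-country level differences.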
Digital intelligence significantly reduces carbon dioxide emissions.
Empirical results from the paper using panel VAR and DID analyses on the three-country sample; specific effect sizes, statistical significance levels, and time period not provided in the summary.
E-commerce has significant environmental impacts due to its large carbon footprint.
Background/literature motivation stated in the paper (qualitative claim); no specific sample size or quantitative estimate provided in the summary.
Discussions among faculty on major higher-education subreddits enact negotiations over surveillance regimes, accountability structures, and academic precarity in real time.
Interpretive finding from thematic analysis of Reddit threads: posts and replies about AI-related classroom issues (e.g., cheating, assessment, policy) show active contention over surveillance and accountability practices and concerns about job security/precariat conditions. (Specific thread counts, timestamps, and coder reliability are not provided in the excerpt.)
Findings reveal that discussions of student cheating, AI policies, writing practices, and faculty labor are not merely technical debates but sites where surveillance regimes, accountability structures, and academic precarity are negotiated in real time.
Empirical claim based on thematic content analysis of Reddit discussions that flagged threads about student cheating, AI policy, writing practices, and faculty labor and interpreted them as spaces where concerns about surveillance, accountability, and precarity are articulated and contested. (Specific examples, counts, and illustrative quotes not included in the excerpt.)
AI intensifies asymmetries of power and creates 'algorithmic hierarchies' that reinforce digital dependence, especially in the Global South.
Analytic finding derived from document review and comparative analysis; no quantitative measures or empirical case sample reported in the text to substantiate scale or prevalence.
Reductions or cuts to governmental translation services intensify employment gaps, increase dependence on informal translation, and exacerbate systemic injustices for LEP immigrants.
Mixed-methods evidence from survey responses (n=150) indicating outcomes after policy reductions, and thematic findings from employer (n=50) and provider (n=20) interviews documenting increased informal translation reliance and adverse labor outcomes.
As AI adoption rises, demand for substitutable skills—such as summarisation, translation, or customer service—decreases.
Analysis of the same job postings dataset (2018–2024) linking measures of AI diffusion at company/industry/region level to changes in frequency of mentions of substitutable skills (examples: summarisation, translation, customer service).
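The core measurement here — the changing frequency of substitutable-skill mentions in postings over time — reduces to counting keyword hits per year. A toy sketch (posting texts and the skill list are illustrative, not the dataset's taxonomy):

```python
from collections import defaultdict

SUBSTITUTABLE = ("summarisation", "translation", "customer service")

def mention_share_by_year(postings):
    """Share of postings per year that mention any substitutable skill."""
    totals, hits = defaultdict(int), defaultdict(int)
    for year, text in postings:
        totals[year] += 1
        if any(skill in text.lower() for skill in SUBSTITUTABLE):
            hits[year] += 1
    return {y: hits[y] / totals[y] for y in sorted(totals)}

postings = [
    (2018, "Customer service agent with translation duties"),
    (2018, "Data analyst"),
    (2024, "ML engineer"),
    (2024, "Prompt engineer"),
    (2024, "Customer service lead"),
]
shares = mention_share_by_year(postings)
print(shares)
```

In the actual study this share would then be regressed on AI-diffusion measures at the company, industry, or region level.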
Technological variations contribute to limiting sustainability efforts.
Highlighted in the paper's analysis of governance challenges (listed alongside corruption and administrative inefficiencies) and referenced in international examples; no specific empirical measurement or sample size is provided in the summary.
Deep-rooted governance issues — specifically corruption, administrative inefficiencies, policy gaps, and technological variations — restrict sustainability efforts, particularly in developing and transition economies.
Analytical emphasis in the paper drawing on global governance frameworks and case illustrations from international instances; the summary does not report empirical sample sizes or quantitative measures.
AI integration into resort-to-force decision-making organizations raises important concerns.
Conceptual claim discussed by the author; the paper does not present empirical data, incident analyses, or quantified risk assessments supporting this claim within the provided excerpt.
Governing the complexity introduced by military AI integration is urgent but currently lacks clear precedents.
Author's claim grounded in argumentation and review-style reasoning; no systematic review or empirical mapping of precedents is provided in the text.
We can expect increased organizational complexity in military decision-making institutions as AI proliferates.
Theoretical inference presented by the author; no empirical methods or measurements (e.g., complexity metrics, case studies, or sample sizes) are reported.
These findings challenge optimistic narratives of seamless workforce adaptation and demonstrate that emerging economies require active pathway creation, not passive skill matching.
Synthesis and interpretation of the quantitative results from the knowledge graph analysis (percent at risk, percent with viable pathways, number of feasible transitions, skill-leverage findings) used to draw policy implications about workforce adaptation strategies.
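The "viable pathways" quantity in that knowledge-graph analysis amounts to a reachability question: from an at-risk occupation, can a lower-risk occupation be reached within a bounded number of feasible skill-overlap transitions? A BFS sketch with an invented toy graph (occupation names, edges, and the hop limit are all assumptions):

```python
from collections import deque

# Hypothetical occupation transition graph: edges = feasible moves (skill overlap above a cutoff)
transitions = {
    "data entry": ["admin assistant"],
    "admin assistant": ["hr coordinator"],
    "hr coordinator": [],
    "machine operator": [],
    "telemarketer": ["customer success"],
    "customer success": [],
}
AT_RISK = {"data entry", "machine operator", "telemarketer"}
SAFE = {"hr coordinator", "customer success"}

def has_viable_pathway(start, max_hops=3):
    """BFS: can an at-risk occupation reach a lower-risk one within max_hops moves?"""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        occ, hops = queue.popleft()
        if occ in SAFE:
            return True
        if hops == max_hops:
            continue
        for nxt in transitions.get(occ, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return False

share = sum(has_viable_pathway(o) for o in AT_RISK) / len(AT_RISK)
print(f"share of at-risk occupations with a viable pathway: {share:.2f}")
```

Occupations with no outgoing feasible edge are exactly the cases where "active pathway creation" rather than passive matching is implied.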