Evidence (4,333 claims)

- Adoption: 5,539 claims
- Productivity: 4,793 claims
- Governance: 4,333 claims
- Human-AI Collaboration: 3,326 claims
- Labor Markets: 2,657 claims
- Innovation: 2,510 claims
- Org Design: 2,469 claims
- Skills & Training: 2,017 claims
- Inequality: 1,378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding; "—" indicates no claims in that cell.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
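As an illustrative aside (not part of the source analysis), row totals and direction shares in a matrix like the one above can be tallied with a short Python sketch. The figures below are copied from three rows of the table, with "—" cells treated as zero:

```python
# Claim counts copied from three rows of the matrix above ("—" cells = 0).
matrix = {
    "Output Quality":       {"positive": 256, "negative": 66, "mixed": 25, "null": 28},
    "Inequality Measures":  {"positive": 25,  "negative": 77, "mixed": 32, "null": 5},
    "Task Completion Time": {"positive": 88,  "negative": 5,  "mixed": 4,  "null": 3},
}

for category, counts in matrix.items():
    total = sum(counts.values())            # row total across all directions
    pos_share = counts["positive"] / total  # share of positive findings
    print(f"{category}: {total} claims, {pos_share:.0%} positive")
```

The computed totals (375, 139, 100) match the table's Total column for these rows, which is a quick way to sanity-check a matrix of this kind.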
Governance
Concern has grown over the ethical implications of AI-driven task automation and the resulting job displacement.
Author statement based on a review of (unspecified) novel studies and existing literature; no empirical sample size, instrumentation, or quantitative measure of 'concern' reported in the provided text.
Over-reliance on data-driven insights without adequate human oversight can worsen market uncertainty.
Reported in the study's qualitative case studies and interpretive analysis as a potential negative consequence of improper AI/Big Data use (no quantified examples provided in the summary).
Algorithmic bias is a potential pitfall of using AI and Big Data that can exacerbate market uncertainty.
Identified as a risk in the paper's qualitative analysis and discussion of pitfalls (no incident counts or empirical quantification provided in the summary).
The risk to the tax system is heightened by the federal government’s dependence on individual labor income even as economic value shifts toward mobile capital and AI ownership by large firms.
Analytical claim in the paper linking tax base dependence to shifts in economic value; no empirical measurement of 'mobile capital' or quantified shift included in the excerpt.
AI threatens to disrupt the tax system’s ability to fulfill its fundamental goals of raising revenue, redistributing income, and regulating taxpayer behavior.
Normative/policy argument made in the paper (no empirical testing or quantified projections provided in the excerpt).
These AI-driven outcomes will have far-reaching impacts on the federal tax system, which heavily relies on taxing individual labor income and payroll rather than capital or consumption.
Paper's policy analysis asserting the composition of federal tax reliance (no revenue breakdowns or statistical evidence included in the excerpt).
Even under optimistic projections, AI is expected to exacerbate wealth inequality because ownership and immense value are concentrated within a subset of Big Tech companies and AI startups.
Argumentative claim in the paper asserting concentration of ownership and value in certain firms; no empirical measures or firm-level data presented in the excerpt.
Some experts predict widespread job displacement due to AI.
Statement in the paper referencing expert predictions (no specific experts, studies, or sample sizes cited in the excerpt).
Global AI governance, regulatory fragmentation, and the effects of privacy laws on market competition are under-studied areas.
Topic modeling and policy-alignment analysis of the >4,600-paper corpus show low prevalence for topics on global governance, regulatory fragmentation, and privacy-law effects on competition.
The economic impacts of risk-based AI regulations are under-studied in the current literature.
Topic-modeling indicates few papers focusing on economic impacts of risk-based regulation; authors' crosswalk with policy documents shows this as a gap.
Research on effective industrial policy for AI is relatively underexplored.
Low prevalence of industrial-policy-related topics in the topic-modeling output and comparison to stated policy priorities in national AI strategies and legislation across regions.
There are notable gaps in the literature in measuring AI-driven economic growth.
Comparison of topic prevalence from the topic-modeling exercise with policy priorities derived from national AI strategies and legislation across regions, showing low coverage of research explicitly measuring AI-driven economic growth.
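The gap analysis described in the notes above can be sketched as a simple crosswalk between research coverage and policy priorities. All topic labels, shares, and the threshold below are illustrative placeholders, not figures from the paper:

```python
# Illustrative crosswalk: flag policy-priority topics with low research coverage.
# Shares are hypothetical fractions of papers whose dominant topic is the key.
corpus_share = {
    "firm productivity":           0.22,
    "risk-based regulation":       0.01,
    "industrial policy":           0.02,
    "measuring AI-driven growth":  0.01,
}
# Topics appearing as priorities in (hypothetical) national AI strategies.
policy_priority = {"risk-based regulation", "industrial policy",
                   "measuring AI-driven growth"}

# A gap is a priority topic whose corpus share falls below a coverage threshold.
gaps = sorted(t for t, share in corpus_share.items()
              if t in policy_priority and share < 0.05)
print(gaps)
```

With these placeholder numbers, all three priority topics fall below the threshold while the well-covered "firm productivity" topic does not, mirroring the structure of the gap claims above.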
Petroleum imports have a large and negative impact on Indonesia's economic growth.
Macroeconomic analysis within the study (regression/statistical assessment of drivers of economic growth) identifying petroleum imports as a substantial negative contributor to growth.
Current national and regional approaches to AI governance are often fragmented, focusing narrowly on industrial competition, piecemeal regulation, or abstract ethical principles.
Asserted in the abstract, implying a review/comparison of existing policies; the abstract does not detail methods or sample beyond a later comparative analysis.
AI deepens inequality.
Asserted in abstract; the abstract does not state empirical methods or data backing this claim.
AI's current trajectory exacerbates labor market polarization.
Asserted in abstract; no study design or empirical sample specified in the abstract.
When ERM is implemented merely as a formal compliance mechanism, firms do not realize the same benefits as when ERM is embedded in culture and daily decision-making.
Synthesis from reviewed empirical and conceptual studies indicating differences in outcomes depending on the nature of ERM implementation; underlying studies appear to include comparative observations but are not detailed in the summary.
Traditional silo-based risk management approaches are inadequate for MSMEs in increasingly volatile and uncertain business environments.
Conceptual arguments and literature reviewed in the article contrasting silo-based approaches with integrated ERM frameworks; based on theoretical and empirical critiques in the reviewed literature.
There are concerns that AI may undermine the right to privacy in India.
Legal and policy analysis in the paper discussing privacy risks associated with AI and data-driven governance (review of privacy frameworks and potential conflicts). No empirical sample size; based on normative/legal analysis.
There are concerns that AI has the potential to further increase economic inequality in India.
The paper raises this as a policy/legal concern using theoretical and analytical argumentation (literature/policy review); no primary empirical study or sample size reported in the summary.
AI adoption increases psychosocial pressure on workers.
Themes surfaced via content analysis of recent peer-reviewed literature on AI and workforce wellbeing within the qualitative library research (specific studies not listed).
AI adoption contributes to inequality (uneven distribution of benefits and opportunities).
Synthesis of arguments and empirical findings from accredited journals included in the literature-based study (sources not enumerated).
AI leads to skill mismatch between workers and emerging job requirements.
Identified through thematic analysis of recent literature on workforce dynamics and skills in the qualitative review (specific article count not reported).
AI causes job displacement.
Recurring finding across reviewed accredited journal articles summarized via thematic content analysis in the library research (no quantitative sample provided).
Employers aware of their own size (market power) may act strategically when hiring and setting wages, generating misallocation and harming workers.
Theoretical argument made by the authors; no micro-econometric estimates, experiments, or sample descriptions are provided in the excerpt to substantiate degree or prevalence of strategic behavior.
This micro approach is at odds with the reality of labor markets in which monopsony potentially matters most.
Interpretive claim by the authors contrasting model assumptions with observed market structure; no empirical data, sample size, or specific markets cited in the excerpt.
The helicoid failure regime was observed across diverse high-consequence domains: clinical diagnosis, investment evaluation, and high-consequence interviews.
Paper reports testing in three domain types during the prospective case series that found the helicoid pattern; evidence consists of domain-specific interaction transcripts and evaluations in the paper.
Under high stakes, when being rigorous and being comfortable diverge, these systems tend toward comfort, becoming less reliable precisely when reliability matters most.
Conclusion drawn from the case series across high-stakes scenarios (clinical, investment, interviews); evidence consists of observed behaviors and failure patterns in the tested interactions.
The helicoid pattern occurred in all seven systems tested, despite explicit protocols designed to sustain rigorous partnership.
Reported outcome of the prospective case series: 7/7 systems exhibited the described pattern; protocols to enforce rigor were applied during testing (details presumably in paper).
A prospective case series documents helicoid dynamics across seven leading systems (Claude, ChatGPT, Gemini, Grok, DeepSeek, Perplexity, Llama families).
Prospective case series described in the paper involving seven named LLM systems; sample size = 7 systems; domains tested include clinical diagnosis, investment evaluation, and high-consequence interviews.
LLMs perform differently when checking is impossible, such as in high-uncertainty, irreversible decisions (clinical treatment on incomplete data; investment under fundamental uncertainty).
Paper asserts this contrast and motivates the study; supporting evidence comes from the reported prospective case series across difficult decision domains (see below).
Digital intelligence significantly reduces carbon dioxide emissions.
Empirical results from the paper using panel VAR and DID analyses on the three-country sample; specific effect sizes, statistical significance levels, and time period not provided in the summary.
E-commerce has significant environmental impacts due to its large carbon footprint.
Background/literature motivation stated in the paper (qualitative claim); no specific sample size or quantitative estimate provided in the summary.
Discussions among faculty on major higher-education subreddits enact negotiations over surveillance regimes, accountability structures, and academic precarity in real time.
Interpretive finding from thematic analysis of Reddit threads: posts and replies about AI-related classroom issues (e.g., cheating, assessment, policy) show active contention over surveillance and accountability practices and concerns about job security/precariat conditions. (Specific thread counts, timestamps, and coder reliability are not provided in the excerpt.)
Findings reveal that discussions of student cheating, AI policies, writing practices, and faculty labor are not merely technical debates but sites where surveillance regimes, accountability structures, and academic precarity are negotiated in real time.
Empirical claim based on thematic content analysis of Reddit discussions that flagged threads about student cheating, AI policy, writing practices, and faculty labor and interpreted them as spaces where concerns about surveillance, accountability, and precarity are articulated and contested. (Specific examples, counts, and illustrative quotes not included in the excerpt.)
AI intensifies asymmetries of power and creates 'algorithmic hierarchies' that reinforce digital dependence, especially in the Global South.
Analytic finding derived from document review and comparative analysis; no quantitative measures or empirical case sample reported in the text to substantiate scale or prevalence.
Reductions or cuts to governmental translation services intensify employment gaps, increase dependence on informal translation, and exacerbate systemic injustices for LEP immigrants.
Mixed-methods evidence from survey responses (n=150) indicating outcomes after policy reductions, and thematic findings from employer (n=50) and provider (n=20) interviews documenting increased informal translation reliance and adverse labor outcomes.
Technological variations contribute to limiting sustainability efforts.
Highlighted in the paper's analysis of governance challenges (listed alongside corruption and administrative inefficiencies) and referenced in international examples; no specific empirical measurement or sample size is provided in the summary.
Deep-rooted governance issues — specifically corruption, administrative inefficiencies, policy gaps, and technological variations — restrict sustainability efforts, particularly in developing and transition economies.
Analytical emphasis in the paper drawing on global governance frameworks and case illustrations from international instances; the summary does not report empirical sample sizes or quantitative measures.
AI integration into resort-to-force decision-making organizations raises important concerns.
Conceptual claim discussed by the author; the paper does not present empirical data, incident analyses, or quantified risk assessments supporting this claim within the provided excerpt.
Governing the complexity introduced by military AI integration is urgent but currently lacks clear precedents.
Author's claim grounded in argumentation and review-style reasoning; no systematic review or empirical mapping of precedents is provided in the text.
We can expect increased organizational complexity in military decision-making institutions as AI proliferates.
Theoretical inference presented by the author; no empirical methods or measurements (e.g., complexity metrics, case studies, or sample sizes) are reported.
Current research in this area focuses primarily on methodology and computer science rather than applied occupational health questions.
Authors' synthesis from the review of existing studies (the paper reports that reviewed studies emphasize methodological and computer science aspects; exact counts or proportions not provided in the excerpt).
The application of machine learning in occupational mental health research remains in its preliminary stages.
Claim stated by the paper based on the authors' literature review of the field (review methodology referenced in the paper; number of studies or specific inclusion criteria not provided in the provided excerpt).
The shadow digital economy poses risks to national security.
Argumentative discussion and reviewed examples linking SDE activities to national security risks (method: conceptual/legal/institutional analysis; no national-security incident count or quantified risk assessment provided).
SDE activity extends beyond direct financial loss, eroding consumer trust and damaging brand reputation through data breaches, fraud, and counterfeiting.
Claim is supported by literature review and illustrative examples/case discussions in the paper (methods: qualitative synthesis; no aggregated empirical measurement of trust or reputational loss reported).
Institutional traps that sustain shadow employment exist and the SDE perpetuates informal and illicit labor arrangements.
Analytic argument and institutional analysis presented in the paper identifying mechanisms ('institutional traps'); evidence appears to be conceptual and drawn from reviewed literature and examples rather than stated empirical longitudinal data.
The shadow digital economy (SDE) is a growing phenomenon amid digital transformation and rising information costs.
Framing and literature review presented in the paper; descriptive synthesis of prior definitions and trends (no empirical sample size reported).
Many core university functions can now be achieved through AI-powered alternatives, potentially rendering conventional models obsolete for many learners.
Analytical assessment by the authors, without reported empirical testing or quantified methodology; based on review of AI capabilities and extrapolation.
Universities' core value proposition is challenged and potentially displaced by AI technologies as they alter how knowledge is accessed, created, and validated.
Authors' analytical argument drawing on technological, economic, and social drivers; presented as synthesis rather than empirical proof (no sample size or empirical method reported).