Evidence (3062 claims)
Claims by topic:

- Adoption: 5227 claims
- Productivity: 4503 claims
- Governance: 4100 claims
- Human-AI Collaboration: 3062 claims
- Labor Markets: 2480 claims
- Innovation: 2320 claims
- Org Design: 2305 claims
- Skills & Training: 1920 claims
- Inequality: 1311 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
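The matrix above is a cross-tabulation of per-claim records by outcome category and direction of finding. A minimal sketch of how such a matrix can be computed; the `claims` records and their field names are illustrative assumptions, not the underlying database's actual schema:

```python
from collections import Counter

# Illustrative per-claim records; the field names are assumptions.
claims = [
    {"outcome": "Firm Productivity", "direction": "Positive"},
    {"outcome": "Firm Productivity", "direction": "Mixed"},
    {"outcome": "Firm Productivity", "direction": "Positive"},
    {"outcome": "Error Rate", "direction": "Negative"},
    {"outcome": "Error Rate", "direction": "Positive"},
]

DIRECTIONS = ["Positive", "Negative", "Mixed", "Null"]

def evidence_matrix(records):
    """Cross-tabulate claims by (outcome, direction), with row totals."""
    cells = Counter((r["outcome"], r["direction"]) for r in records)
    outcomes = sorted({r["outcome"] for r in records})
    return {
        o: {**{d: cells[(o, d)] for d in DIRECTIONS},
            "Total": sum(cells[(o, d)] for d in DIRECTIONS)}
        for o in outcomes
    }

matrix = evidence_matrix(claims)
```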
Filtered topic: Human-AI Collaboration
AI is not an unprecedented disruption; its effects can be situated within established economic frameworks related to automation and task substitution.
Conceptual analysis comparing recent AI developments to historical automation and task-substitution frameworks; empirical grounding claimed via publicly available labor market and productivity data (details not provided).
Three developer archetypes are present: Enthusiasts, Pragmatists, and the Cautious.
Classification/typology derived from the study's survey data of 147 developers (e.g., cluster analysis or thematic grouping) identifying three distinct groups based on usage patterns, attitudes, and intent.
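The excerpt does not say how the three groups were derived (cluster analysis vs. thematic grouping). Purely as an illustration, a rule-based assignment over two hypothetical survey variables might look like this; the variables and thresholds are invented, not the study's:

```python
# Hypothetical rule-based typology; the study's actual variables, method,
# and thresholds are not given in the excerpt.
def archetype(usage_per_week: float, attitude: float) -> str:
    """usage_per_week: AI-tool sessions per week; attitude: -1 .. +1."""
    if usage_per_week >= 10 and attitude > 0.5:
        return "Enthusiast"
    if usage_per_week < 2 or attitude < 0:
        return "Cautious"
    return "Pragmatist"

developers = [(15, 0.8), (5, 0.2), (1, -0.4), (8, 0.6)]
labels = [archetype(u, a) for u, a in developers]
```

A clustering approach (e.g., k-means over standardized survey responses) would derive such groups from the data rather than from fixed thresholds.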
Improvements in caseworker accuracy level off as chatbot accuracy increases (an "AI underreliance plateau").
Observed pattern in experimental results: incremental gains in caseworker accuracy diminish at higher chatbot accuracies, described by authors as an 'AI underreliance plateau' (specific curves or thresholds not in the excerpt).
AI alters job structures, workflow patterns, and human roles in decision-making processes.
Thematic content analysis of recent accredited journal literature as part of the qualitative library research (sources not enumerated).
AI is fundamentally transforming the workplace by creating new opportunities, intensifying challenges, and redefining professional skills.
Qualitative library research: systematic documentation and thematic content analysis of recent accredited journal sources (number of sources not specified).
Contextual and technological factors (work environment and digital/AI intensity) enhance human-centered capabilities but do not substitute for them.
Authors state these factors were included as contextual moderators in the analysis and that results indicate they enhance but do not replace emotional/psychological predictors. The excerpt does not include moderator effect sizes, sample size, or statistical tests.
When confronted about the recurring failure, the systems attributed its persistence to structural factors in their training that lie beyond the reach of conversation.
Observation from the case series: model responses/self-reports during testing attributed persistent failure to training/structural causes; evidence is conversational transcript analysis.
Variations in prompt design influenced agents’ performance indicators, including response accuracy, task completion efficiency, coordination coherence, and error rates.
Experimental simulations with systematic variation of prompt designs and quantitative analysis of resulting performance indicators listed above. (Sample size, effect sizes, and statistical tests not specified in the provided excerpt.)
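The paper's simulation harness is not described in the excerpt. A hedged sketch of a full-factorial sweep over prompt-design factors, with invented factor names and a stand-in agent that returns simulated indicators:

```python
import itertools
import random

random.seed(0)

# Illustrative prompt-design factors; the study's actual dimensions are not given.
STYLES = ["terse", "verbose"]
ROLES = ["none", "specialist"]
FORMATS = ["free_text", "structured"]

def run_agent(style, role, fmt):
    """Stand-in for one simulated agent run; returns invented indicators."""
    base = 0.6 + 0.1 * (style == "verbose") + 0.05 * (role == "specialist")
    return {
        "accuracy": min(1.0, base + random.uniform(-0.05, 0.05)),
        "completion_s": random.uniform(2.0, 10.0),
        "errors": random.randint(0, 3),
    }

# Full 2x2x2 factorial over prompt-design conditions.
results = {cond: run_agent(*cond)
           for cond in itertools.product(STYLES, ROLES, FORMATS)}
```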
AI is not simply replacing tasks or only requiring more AI developer skills; it may be transforming workforce skill requirements to favor human attributes that enhance collaboration with intelligent systems.
Synthesis of the three empirical findings above (higher prevalence of complementary non-technical skills in AI roles, wage premiums for those skills, and spillover increases in complementary-skill demand alongside decreases in substitutable skills) based on analysis of ~30 million job postings (2018–2024).
Knowledge democratization through AI may reduce educational inequality but may also exacerbate digital divides and erode universities' social mobility function.
Theoretical and socio-political analysis considering opposing effects; framed as a conditional/mixed outcome without empirical measurement reported in the paper.
AI displacement potential varies substantially across university functions.
Summary finding from the paper's comparative analysis of university functions; the paper provides ranked/percent estimates but does not report empirical sampling or statistical testing.
The impact of AI on supply chain stability in sports enterprises exhibits heterogeneity by enterprise type and profitability status.
Heterogeneity/subgroup analyses within the DML panel estimations (sample of 45 listed SEs, 2012–2023) showing differential AI effects across firm types and across firms with different profitability profiles.
There is significant variation in psychological readiness for AI across generational cohorts, industry sectors, and organizational maturity levels.
Aggregated findings from emerging AI–HRM empirical studies referenced in the paper (no specific study counts or sample sizes provided in the summary).
In a 2021 national labor survey, no single task was reported as automated by more than 57% of respondents, compared with a maximum of 52% in the mid-2000s.
National labor survey results (mid-2000s vs 2021) as reported in the paper; survey details and sample size are not included in the excerpt.
Exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants' prior disposition or direct experience.
Information-exposure treatment within the same experimental design; attitudinal outcomes measured in the three-wave panel showed statistically significant change following information exposure, including among participants whose prior disposition or direct AI-as-boss experience would predict resistance.
Personal experience with an AI 'boss' affected workers' job performance.
Randomized experiment described in the paper: over 1,500 workers were randomly assigned to task supervision by either an AI or a human 'boss' (task content and valence also randomized), with job performance measured across a three-wave panel.
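A minimal sketch of the random-assignment step such a design implies; the boss-type factor is from the paper, while the task factors' level names and the unblocked assignment are illustrative simplifications:

```python
import random

random.seed(42)

N_WORKERS = 1500  # sample size reported in the paper

def assign(worker_id):
    """One worker's random assignment. Boss type is the paper's factor;
    the task factors' level names are illustrative."""
    return {
        "worker": worker_id,
        "boss": random.choice(["AI", "human"]),
        "task_content": random.choice(["routine", "creative"]),
        "task_valence": random.choice(["positive", "negative"]),
    }

panel = [assign(i) for i in range(N_WORKERS)]
ai_share = sum(p["boss"] == "AI" for p in panel) / N_WORKERS
```

A real design would typically block or stratify the assignment and then measure job performance across the three panel waves.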
Selection of human-LLM interaction archetype can influence LLM outputs and decisions.
Findings from the evaluation across clinical diagnostic cases (empirical comparison of archetypes' effects on outputs and decisions). Specific experimental details and sample size are not provided in the abstract.
The authors evaluate these diverse archetypes across real-world clinical diagnostic cases to examine the potential effects of adopting distinct human-LLM archetypes on LLM outputs and decision outcomes.
Empirical evaluation described in the paper using real-world clinical diagnostic cases. Method: application of archetypes to clinical cases and comparison of resulting LLM outputs and decisions. Sample size and specific case details are not provided in the abstract.
Each category of AI trigger presents distinct avenues for value creation alongside significant risks.
Analytical argument in the paper discussing potential benefits and risks per trigger type. No empirical evaluation, case studies, or quantitative evidence reported here.
AI is transforming jobs that are technical in nature.
Asserted in the paper's conceptual discussion of dual impacts; presented without empirical measurement or reported sample data in this paper.
The study clarifies the interplay between perceived usefulness, trust, and ethical design, offering insights into responsible AI implementation to empower consumers.
Authors' reported contribution combining empirical SEM findings (linking perceived usefulness and trust to outcomes) with normative discussion on ethical design; specific empirical mediation/moderation tests not detailed in the summary.
In the sentiment-analysis task, individual differences in user characteristics shape how users respond to AI explanations.
Results from the preregistered sentiment-analysis experiment reported in the paper indicating interaction effects between user characteristics and explanation types. (Exact sample size and statistical details not provided in the excerpt.)
Data maturity, ethical governance of algorithms, and industry type shape business performance in AI-augmented workflows.
Moderator/subgroup analyses and qualitative synthesis across the reviewed studies indicating these contextual factors influence outcomes; based on the 85-publication review.
Most moderators tested in the analyses have a considerable influence on the relationship between AI use and business performance.
Moderator analyses reported in the meta-analysis (unspecified number of moderators) across the sample of reviewed studies (n=85).
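The meta-analysis's moderator tests are not detailed in the excerpt. A minimal fixed-effect subgroup (moderator) analysis on invented effect sizes shows the general shape of such a test: pool by inverse variance within each moderator level, then compute a between-group Q statistic:

```python
# Illustrative fixed-effect subgroup (moderator) analysis; the effect sizes,
# variances, and moderator below are invented, not from the 85-study review.
studies = [
    # (effect size, variance, moderator level)
    (0.50, 0.02, "high_data_maturity"),
    (0.42, 0.03, "high_data_maturity"),
    (0.15, 0.02, "low_data_maturity"),
    (0.10, 0.04, "low_data_maturity"),
]

def pooled(group):
    """Inverse-variance pooled estimate and its variance."""
    weights = [1.0 / v for _, v, _ in group]
    est = sum(w * es for w, (es, _, _) in zip(weights, group)) / sum(weights)
    return est, 1.0 / sum(weights)

pools = {lvl: pooled([s for s in studies if s[2] == lvl])
         for lvl in sorted({s[2] for s in studies})}

grand_est, _ = pooled(studies)
# Q-between is chi-square distributed on (subgroups - 1) df under the null of
# no moderation; 3.84 is the 5% critical value for df = 1.
q_between = sum((est - grand_est) ** 2 / var for est, var in pools.values())
```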
A consistent finding is that implementation outcomes are determined by institutional conditions rather than algorithmic performance.
Synthesis across the 81 reviewed sources indicating recurring patterns where institutional factors (governance, reimbursement, workforce, regulations) drive implementation success more than raw algorithmic accuracy. Specific studies supporting this pattern are not named in the abstract.
The rapid spread of artificial intelligence (AI) in U.S. organizations has radically altered the managerial decision-making process.
Statement based on a conceptual research design and integration of interdisciplinary literature (literature review). No empirical sample or quantitative data reported.
The increasing integration of artificial intelligence (AI) into organizational decision-making has fundamentally reshaped how managers analyze information, evaluate alternatives, and exercise judgment.
Synthesis of interdisciplinary literature presented in this conceptual meta-analysis; no primary empirical sample or quantitative effect sizes reported in the abstract (literature review basis).
Progressing from ChatGPT 3.5 to 4.0 produced three distinct effect scenarios across markets, which reinforce the paper's inflection point conjecture.
Empirical comparison of the effects associated with different ChatGPT versions (3.5 vs 4.0) on online labor markets; the method is implied to be a similar difference-in-differences (DiD) or temporal comparison. (Specific sample sizes and the definitions of the three scenarios are not provided in the abstract.)
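The 2x2 difference-in-differences logic such a comparison implies can be sketched as follows; the outcome values are invented for illustration:

```python
from statistics import mean

# Invented weekly outcomes (e.g., tasks won by human freelancers); "treated"
# markets saw the GPT-4-class upgrade, "pre"/"post" bracket the release.
data = {
    ("treated", "pre"):  [10, 12, 11, 13],
    ("treated", "post"): [8, 9, 7, 10],
    ("control", "pre"):  [10, 11, 12, 11],
    ("control", "post"): [10, 12, 11, 11],
}

def did(d):
    """2x2 difference-in-differences: treated change minus control change."""
    treated = mean(d[("treated", "post")]) - mean(d[("treated", "pre")])
    control = mean(d[("control", "post")]) - mean(d[("control", "pre")])
    return treated - control

effect = did(data)  # negative: outcome fell in upgraded markets vs. controls
```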
The authors developed a Cournot competition model that identifies an inflection point for each market: before this point human workers benefit from AI enhancements; beyond this point human workers would be replaced.
Theoretical modeling via a Cournot competition framework constructed by the authors to characterize market dynamics and derive an inflection point; this is a model-based (analytical) result rather than an empirical estimate.
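The paper's functional forms are not given in the abstract. A hypothetical Cournot sketch, with every parameter invented: AI capability k first lowers the human side's marginal cost (augmentation, which saturates) while continually lowering the AI competitor's cost, yielding a benefit-then-displacement pattern with an interior inflection:

```python
# Hypothetical Cournot duopoly; every parameter and functional form here is an
# assumption, not the paper's model. Inverse demand: P = a - b*(q_h + q_ai).
a, b = 100.0, 1.0

def c_h(k):
    """Human marginal cost: AI augmentation lowers it until it saturates."""
    return 60.0 - 2.0 * min(k, 5.0)

def c_ai(k):
    """AI competitor's marginal cost: keeps falling with capability k."""
    return max(0.0, 70.0 - 3.0 * k)

def human_output(k):
    """Interior Cournot equilibrium q_h = (a - 2*c_h + c_ai) / (3b), floored at 0."""
    return max(0.0, (a - 2.0 * c_h(k) + c_ai(k)) / (3.0 * b))

ks = [i / 10 for i in range(301)]              # capability grid, 0.0 .. 30.0
qs = [human_output(k) for k in ks]
k_peak = ks[qs.index(max(qs))]                 # inflection: human output peaks
k_exit = next(k for k, q in zip(ks, qs) if k > k_peak and q == 0.0)
```

Below `k_peak` the augmentation effect dominates and rising AI capability raises human equilibrium output; beyond it, the AI competitor's falling cost squeezes human output to zero at `k_exit`, mirroring the benefit-then-replacement pattern the inflection-point conjecture describes.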
AI adoption rates differ across countries and firm sizes.
Descriptive/empirical comparisons using AI diffusion indicators and firm-level data from the four named Central and Eastern European countries; heterogeneity by firm size reported.
AI productivity effects are not direct but conditional on organizational readiness.
Empirical analysis of firm-level data covering Serbia, Croatia, Czechia, and Romania combined with AI diffusion indicators; conditional/interaction analysis implied by framing (paper reports that productivity effects depend on organizational factors).
Although the asymmetry in who benefits does not preclude beneficial entry, it raises strategic issues for deployment of AI technology in multiagent settings.
Interpretation and discussion in the paper based on the mixed-population results and observed payoff asymmetries; this is a normative/strategic claim rather than an empirical measurement.
In some parameter regimes, non-adopters may benefit disproportionately from the cooperation induced by adopters (i.e., non-adopters can free-ride on adopter-induced coordination).
Parameter-regime analysis of mixed populations reported in the paper indicating payoff asymmetries between adopters and non-adopters; specific parameter ranges and quantitative results are not provided in the abstract.
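The abstract gives no functional forms or parameter ranges. A toy public-goods-style sketch (all assumptions ours) illustrates how a free-riding regime can arise when adopters' private gain falls short of the adoption cost:

```python
# Toy public-goods-style model; all functional forms and numbers are our
# assumptions, not the paper's. x = adopter share in the population.
def payoffs(x, benefit=10.0, cost=3.0, private=2.0):
    """Adopters pay `cost` and gain `private`; the cooperation they induce is
    worth benefit * x to every agent, adopter or not."""
    common = benefit * x
    return common + private - cost, common   # (adopter, non-adopter)

# Regime sweep: non-adopters free-ride and come out ahead exactly when the
# adopters' private gain falls short of the adoption cost.
regimes = {private: payoffs(0.5, private=private)
           for private in (1.0, 3.0, 5.0)}
```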
AI readiness emerged as both an opportunity and a source of uncertainty for workers.
Analyses of survey responses about AI readiness and perceptions showed mixed patterns—some respondents view AI competence as enabling optimism/advancement while others report uncertainty—based on the 5,000-worker and 501-employer data.
Smaller models augmented with curated Skills can match the performance of larger models without Skills (model–skill tradeoff).
Cross-size performance comparisons reported across seven agent–model configurations showing that certain smaller model + curated-Skill pairings achieve pass rates comparable to larger model baselines without Skills. Analysis uses the SkillsBench trajectories (7,308 total) to support tradeoff claims.
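A minimal sketch of the pass-rate aggregation such a comparison requires; the trajectory records and field names are illustrative, not SkillsBench's actual schema:

```python
from collections import defaultdict

# Illustrative trajectory records; field names and values are assumptions,
# not SkillsBench's actual schema.
trajectories = [
    {"model": "small", "skills": True,  "passed": True},
    {"model": "small", "skills": True,  "passed": True},
    {"model": "small", "skills": True,  "passed": False},
    {"model": "large", "skills": False, "passed": True},
    {"model": "large", "skills": False, "passed": True},
    {"model": "large", "skills": False, "passed": False},
]

def pass_rates(records):
    """Pass rate per (model, skills) configuration."""
    tally = defaultdict(lambda: [0, 0])      # config -> [passes, runs]
    for r in records:
        cell = tally[(r["model"], r["skills"])]
        cell[0] += r["passed"]
        cell[1] += 1
    return {cfg: passes / runs for cfg, (passes, runs) in tally.items()}

rates = pass_rates(trajectories)
```

On these toy records the small model with curated Skills ties the large no-Skills baseline, which is the shape of the tradeoff claim.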
Firms with better data infrastructure and higher initial IT investment will adopt AI faster, potentially widening performance gaps across firms and industries.
Theory-informed assertion and literature synthesis; no empirical heterogeneity analysis is specified in the abstract.
Complementarity between AI and skilled accountants may raise wages for analytical roles while compressing demand for routine clerical roles, contributing to wage polarization.
Prediction grounded in economic theory and prior literature; the paper does not report direct wage-change estimates in the abstract.
AI will automate routine accounting tasks, reducing demand for low-skill bookkeeping work while increasing demand for higher-skilled roles (data interpretation, advising, oversight), creating occupational reallocation and upskilling needs.
Projection based on task-based labor economics literature and the paper's synthesis; not supported by specific longitudinal labor-market estimates in the abstract.
Generative AI can play a bounded, auditable role as multilingual, low‑bandwidth learning support, but must be governed to avoid digital gatekeeping and should be excluded from eligibility screening, risk scoring, or automated decision‑making.
Analytical assessment of AI's potential roles and risks in training delivery; governance prescriptions based on policy and risk reasoning rather than empirical AI evaluations in the corridor.
Proposition 3: Rights‑based effectiveness requires measurable capability outcomes and institutional follow‑through (beyond information transfer).
Normative and governance analysis based on gap mapping and the paper's empirical agenda; not tested with outcome data in this study.
Training can be treated as migration-governance infrastructure that functions simultaneously as a capability intervention (actionable navigation, contract comprehension, safe help‑seeking), a labour‑market signal when aligned with TVET/human-capital planning, and a potential gatekeeping node if access, assessment, and accountability are weak.
Conceptual reframing supported by policy analysis and governance gap mapping; no empirical validation provided in the paper.
Students use GenAI as a co-designer and idea generator, which modifies workflow, decision points, and evaluative practices in their design process.
Qualitative interview data from architecture students; thematic analysis surfaced accounts of GenAI being used for ideation, variant generation, and as a collaborative partner (N unspecified).
Collaboration between architecture students and generative AI reshapes creative cognition in the architectural design process through algorithmic thinking strategies.
Semi-structured interviews with architecture students (interview sample size not specified) analyzed via inductive thematic analysis; authors synthesize recurring themes linking GenAI use to changes in cognitive strategies.
Fidelity gains from prompt engineering, model selection, or participant/environment modeling have been limited and context-dependent.
Synthesis of studies that tested prompt/model/participant modeling interventions and reported mixed or modest fidelity improvements; aggregated conclusion in the review.
Heterogeneous program design and outcome measurement limit purchasers' ability to identify high‑value AI education offerings, creating a market opportunity but also risk.
Observed variability in program length, setting, content focus, target audience, and evaluation methods across the 27 included programs as reported in the review.
The predominant focus on entry‑level trainees suggests future workforce increases in basic AI literacy but leaves current mid‑career clinicians undertrained, potentially slowing adoption and creating heterogeneous skill premiums.
Distribution of target audiences and career stages in the 27 programs (56% entry‑to‑practice; many targeted students/early practitioners) and interpretation in the paper about labor market implications.
Compliance costs and audit requirements create regulatory barriers to entry but also incentives for standardized metadata and interoperable systems; policy can encourage open standards to reduce lock-in.
Policy analysis and recommendation in paper (theoretical); no regulatory cost quantification provided.
Algorithmic lesson planning, automated audits, and data-driven competency mapping are natural targets for AI augmentation and can reduce recurring resource burdens but require quality-labelled data, strong governance, and transparency.
Paper's discussion of AI complementarity (conceptual); no implementation trials or performance metrics presented.
The taxonomy clarifies where substitution versus complementarity are likely: AI-assisted tasks imply partial substitution of routine work; AI-augmented applications generate complementarities that increase demand for higher cognitive skills; AI-automated systems shift labor toward monitoring, exception handling, and governance.
Inference from mapping the three interaction levels to observed case features (n=4) and application of the Bolton et al. framework in cross-case synthesis.
AI-augmented systems support real-time medical tasks (e.g., decision support during procedures), amplifying human judgment and speed but raising required cognitive skills and changing training and coordination practices.
Findings from the case(s) labeled AI-augmented in the four-case qualitative sample and cross-case interpretive analysis using the service-innovation framework.