Evidence (2954 claims)
- Adoption (5126 claims)
- Productivity (4409 claims)
- Governance (4049 claims)
- Human-AI Collaboration (2954 claims)
- Labor Markets (2432 claims)
- Org Design (2273 claims)
- Innovation (2215 claims)
- Skills & Training (1902 claims)
- Inequality (1286 claims)
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Active filter: Human-AI Collaboration
Trained on unprecedented volumes of human-produced text, LLMs encode large-scale regularities in how people argue, justify, narrate, and negotiate norms across social domains.
Inference based on known pretraining procedures for LLMs and the paper's theoretical account; no specific corpus size or empirical validation reported in the provided text.
There is a third, emerging ambition in AI research: using large language models (LLMs) as scientific instruments for studying human behavior, culture, and moral reasoning.
Argumentative proposal grounded in the paper's conceptual analysis and review of existing methodological work; framed as an emerging research program rather than demonstrated empirical fact.
Vocational graduates who undergo strong work-based training demonstrate competitive and sometimes superior long-term employment trajectories compared with other pathways.
Comparative empirical studies and secondary analyses referenced in the paper that link work-based vocational training to favorable long-term outcomes (the summary does not provide exact studies, effect sizes, or sample sizes).
Higher education graduates generally experience favorable employment outcomes.
Synthesis of prior empirical studies and secondary labor-market indicators cited in the paper indicating better employment prospects for higher education graduates (no specific effect sizes or sample n given in the summary).
There has been substantial growth in higher education attainment across the countries examined.
Descriptive results drawn from secondary data and comparative empirical studies documenting trends in higher education enrollment and attainment (paper does not report specific country list or sample sizes in the summary).
The findings provide practical guidance for entrepreneurs on building adaptive, AI-integrated organizations by redefining hiring, decision processes, and learning practices.
Prescriptive recommendations derived from the interview analysis and observed patterns in the sample of entrepreneurs (qualitative grounding; specific examples or measured impacts not provided in the excerpt).
Hybrid decision architectures have emerged: startup-specific configurations where algorithmic reasoning and human judgment recursively interact to shape decisions, roles and routines.
Thematic synthesis of interview data identifying recurring patterns of human–AI recursive interaction in decision-related practices across the studied startups (qualitative evidence; no quantitative counts reported).
Entrepreneurs who founded startups after ChatGPT's release integrated AI into their post-release ventures.
Direct accounts from the subset of interviewees who founded startups after ChatGPT's release describing AI incorporation in those ventures (qualitative interview evidence; sample details not given).
AI is becoming embedded in the architecture of startups rather than serving only as a task-automation tool.
Interview data and qualitative analysis identifying patterns of AI integration across startup roles, routines and structures (derived from the same semi-structured interview sample; exact N not provided).
Facilitated access to AI following the release of ChatGPT is transforming how startups organize and make decisions.
Qualitative study using semi-structured interviews with entrepreneurs who founded startups both before and after ChatGPT's release and who integrated AI into their post-release ventures; thematic/qualitative analysis of interview data. (Sample size not reported in the provided excerpt.)
Perceived autonomy enhances the positive effect of perceived algorithmic standardized guidance on riders' outcomes (i.e., strengthens the beneficial impact on mental health and reduction in risky riding via work pressure).
Interaction/moderation effects tested via SEM on 466 Chinese food delivery riders; results reported that perceived autonomy amplifies the beneficial pathways from standardized guidance.
Perceived autonomy mitigates (buffers) the negative effect of perceived algorithmic tracking evaluation on risky riding behavior (i.e., reduces the tendency toward risky riding driven by tracking evaluation via work pressure).
Moderation analysis within SEM using sample of 466 Chinese delivery riders with bootstrapped tests for interaction effects between tracking evaluation and perceived autonomy.
Perceived autonomy mitigates (buffers) the negative effect of perceived algorithmic tracking evaluation on riders' outcomes (i.e., reduces the adverse impact on mental health and risky riding via work pressure).
Moderation tested in SEM on data from 466 Chinese food delivery riders; interaction effects reported indicating perceived autonomy weakens the negative pathways from tracking evaluation.
Perceived algorithmic standardized guidance improves food delivery riders' mental health by reducing work pressure.
466 Chinese food delivery riders; SEM and bootstrapping testing mediation (standardized guidance -> work pressure -> mental health) within JD-R framework.
Perceived algorithmic behavioral constraint promotes risky riding behavior among food delivery riders through increased work pressure.
Data from 466 Chinese food delivery riders; mediation tested using SEM and bootstrapping showing behavioral constraint -> work pressure -> risky riding behavior.
Perceived algorithmic tracking evaluation promotes risky riding behavior among food delivery riders through increased work pressure.
Survey data from 466 Chinese food delivery riders; SEM and bootstrapping used to test mediation (tracking evaluation -> work pressure -> risky riding behavior).
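The rider claims above all rest on the same method: SEM mediation paths (e.g., tracking evaluation → work pressure → risky riding) tested with bootstrapped confidence intervals. A minimal sketch of a bootstrapped indirect-effect test, using synthetic data and OLS path estimates in place of a full SEM (variable names and effect sizes are illustrative, not the study's):

```python
# Sketch of a bootstrapped mediation test (X -> M -> Y) with synthetic data.
# The study fit a full SEM on survey responses from 466 riders; here the
# a- and b-paths are estimated with plain OLS for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 466
tracking = rng.normal(size=n)                    # X: perceived tracking evaluation
pressure = 0.5 * tracking + rng.normal(size=n)   # M: work pressure
risky = 0.4 * pressure + rng.normal(size=n)      # Y: risky riding behavior

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # a-path: X -> M
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # b-path: M -> Y given X
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                  # resample cases with replacement
    boot.append(indirect_effect(tracking[idx], pressure[idx], risky[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A percentile CI that excludes zero is the usual evidence for mediation; the same resampling logic extends to the moderation (interaction) terms the study reports.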
Algorithms now surpass human capability in processing speed, pattern recognition and data-driven decision-making.
Asserted in the paper's opening claims as a general factual premise; grounded in the paper's literature grounding but no original empirical tests or sample reported.
Education, reskilling, and institutional responses are important in shaping the economic outcomes of artificial intelligence.
Policy implication derived from the observed/modeled heterogenous effects of AI on occupations and productivity; presented as a normative recommendation rather than an empirically tested result in the provided text.
Productivity gains associated with AI may support long-term economic growth.
Reference to productivity data and growth theory linking productivity improvements to long-run growth; the paper states this as a potential outcome but does not provide quantified long-run estimates or empirical identification in the excerpt.
AI complements higher-skill labor.
Interpretation of labor market data patterns and theoretical task-complementarity arguments presented in the paper; empirical details (which datasets, estimation strategy, sample size) are not provided in the text excerpt.
Artificial intelligence is a skill-biased technological innovation.
Framing and argumentation in the paper situating AI within the skill-biased technical change literature; references to analyses of publicly available labor market and productivity data (sources, time periods, and sample sizes not specified in the text).
Big Data Analytics and AI can improve audit accuracy and reduce costs.
Reported results from literature review and empirical analysis in the study; precise cost or accuracy metrics and sample information are not provided in the abstract.
Integrating BDA and AI within the Audit 5.0 framework represents a fundamental shift toward intelligent, adaptive, and value-driven auditing, while underscoring the need for enhanced auditor competencies and alignment with evolving regulatory and professional requirements.
Overall synthesis of literature and empirical results from the mixed-method study (systematic review + SEM-based empirical analysis in finance and technology sectors); phrased as a high-level conclusion.
There is a need for stronger governance, ethical frameworks, and targeted training to fully realize the benefits of digital auditing.
Conclusions drawn from the literature synthesis and empirical observations regarding challenges to implementing Audit 5.0; recommendation rather than a measured effect.
BDA and AI enable real-time and predictive risk assessment and enhanced fraud detection, expanding audit coverage beyond traditional sampling.
Synthesis of prior theoretical and empirical studies and the study's empirical analysis (SEM) focusing on risk assessment, anomaly detection, and continuous auditing in finance and technology sectors.
Investment in AI correlates with improved audit efficiency.
Reported empirical correlations from the study's analysis (SEM) combined with literature review; detailed metrics and sample information not included in the abstract.
Investment in AI correlates with reductions in audit restatements.
Empirical evidence cited in the study (SEM-based analysis across organizations in finance and technology); exact sample size and statistical coefficients not provided in the summary.
BDA and AI facilitate continuous auditing (real-time auditing).
Synthesis of prior literature and empirical analysis within Audit 5.0 framework; methods include systematic literature review and SEM on sectoral samples (finance and technology).
Digitalization (BDA and AI) improves audit productivity.
Empirical analysis (SEM) and literature synthesis focused on finance and technology organizations; empirical details (sample size, effect sizes) not given in the summary.
Audits supported by Big Data Analytics (BDA) and artificial intelligence (AI) significantly outperform traditional audit approaches.
Mixed-method research: systematic literature review plus empirical analysis using structural equation modeling (SEM) on organizations in the finance and technology sectors (sample size not reported in the provided text).
High current usage, breadth of application, frequent use of AI tools for testing, and ease of use correlate strongly with future intended adoption.
Correlational/regression analyses of survey variables (N=147) predicting respondents' stated future intention to increase AI tool use from measures of current usage, breadth of tool applications, frequency of testing-tool use, and perceived ease-of-use.
Developers report both productivity and quality gains from using AI tools.
Aggregate self-reported responses from 147 professional developers indicating perceived improvements in productivity and code quality associated with AI tool use.
There is no perceptual support for the Quality Paradox; Perceived Productivity (PP) is positively correlated with Perceived Code Quality (PQ) improvement.
Statistical analysis of survey measures (N=147) showing a positive correlation between respondents' Perceived Productivity scores and their Perceived Code Quality improvement scores; absence of evidence for a negative PP–quality relationship.
Frequent and broad AI tool use are the strongest correlates of both Perceived Productivity (PP) and quality, with frequency the strongest.
Correlational analysis of self-reported survey responses from a sample of 147 professional developers measuring AI tool usage frequency and breadth and perceived outcomes (Perceived Productivity and Perceived Code Quality).
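The developer-survey findings above come down to Pearson correlations between self-report scales. A minimal sketch with synthetic scores (the scale ranges and the correlation structure here are assumptions; only the sample size, N = 147, is from the study):

```python
# Sketch of the correlational analysis described above: Pearson's r between
# two survey measures. PP/PQ scores are synthetic, generated to be positively
# related by construction; the study's N was 147 professional developers.
import numpy as np

rng = np.random.default_rng(1)
n = 147
pp = rng.normal(3.5, 0.8, n)             # Perceived Productivity (illustrative scale)
pq = 0.6 * pp + rng.normal(1.2, 0.6, n)  # Perceived Code Quality, correlated with PP
r = np.corrcoef(pp, pq)[0, 1]
print(f"Pearson r(PP, PQ) = {r:.2f}")
```

A positive r on such measures is what the paper reads as "no perceptual Quality Paradox": respondents who feel more productive with AI tools also report higher, not lower, code quality.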
Adopting a standardised yet flexible approach to incentive design can help produce more reliable and generalizable knowledge in human–AI decision-making research.
Authors' argument/recommendation based on their thematic review and the proposed framework (this is a normative claim; no empirical validation provided in excerpt).
Human judgement remains paramount for high-stakes decision-making.
Assertion in the paper framing the motivation for human–AI collaboration research (based on prior literature and domain practice; no specific empirical data or sample sizes provided in excerpt).
AI has revolutionised decision-making across various fields.
Statement in paper's introduction summarizing prior work and trends (literature-level claim; no specific studies or sample sizes provided in excerpt).
Overall, the framework improves efficiency, fairness, and quality of care in hospital workforce management.
Aggregate conclusion drawn from experiments (forecasting metrics, scheduling conflict/fairness improvements, performance evaluation results, stress tests, and pilot deployment outcomes) described in the paper.
Pilot deployments of the framework demonstrated tangible benefits, including an 18% reduction in patient waiting times and a 14% improvement in satisfaction scores.
Reported outcomes from pilot deployments (real-world trials); the number of pilot sites, duration, patient/sample sizes, and baseline comparison methodology are not detailed in the provided text.
Stress tests confirmed scalability: solver times remained under 95 seconds for instances with 1,000 staff members.
Scalability/stress testing reported in the paper using scheduling solver on problem instances with up to 1,000 staff; hardware and solver configuration not specified in the excerpt.
The performance evaluation framework analysis revealed 74% positive patient feedback.
Reported result from NLP analysis of patient surveys in the experiments; the number of patient survey responses and timeframe are not provided in the excerpt.
The intelligent staff scheduling module reduces scheduling conflicts by 41% compared to conventional methods while improving fairness (Gini coefficient = 0.08).
Results from scheduling optimization experiments reported in the paper; comparison against unspecified 'conventional methods'; specific experimental sample sizes (number of staff/rosters used for the comparison) not provided in the excerpt.
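The fairness figure quoted above (Gini coefficient = 0.08) is a standard inequality measure applied to per-staff workload; values near 0 mean shifts are spread evenly. A minimal sketch, with illustrative shift counts rather than the study's rosters:

```python
# Gini coefficient over per-staff shift counts (0 = perfectly even workload,
# values toward 1 = concentrated on few staff). Shift counts are illustrative.
def gini(values):
    xs = sorted(values)
    n = len(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    total = sum(xs)
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # even roster -> 0.0
print(gini([0, 0, 0, 40]))     # one person covers everything -> 0.75
```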
Workforce demand forecasting using LSTM, XGBoost, and Random Forest models predicts patient admissions and staffing needs, with LSTM achieving the best performance (MAE = 6.1, R² = 0.91).
Experimental comparison of ML models on synthetic and real hospital datasets; reported forecasting metrics MAE and R² for LSTM (other models' metrics not quoted in the provided text). The specific dataset size and train/test splits are not reported in the excerpt.
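The two forecasting metrics quoted above are mean absolute error (MAE, in admissions per period) and the coefficient of determination (R²). A minimal sketch of both on illustrative admission counts, not the study's data:

```python
# MAE and R^2 as used to score the admission forecasts above.
# The y_true/y_pred series are illustrative five-day admission counts.
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)           # unexplained error
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total variance
    return float(1 - ss_res / ss_tot)

y_true = np.array([120.0, 135.0, 150.0, 110.0, 160.0])
y_pred = np.array([125.0, 130.0, 145.0, 115.0, 155.0])
print(mae(y_true, y_pred))  # 5.0
print(r2(y_true, y_pred))
```

An MAE of 6.1 therefore means the LSTM's admission forecasts were off by about six patients per period on average, while R² = 0.91 means it explained 91% of the variance in demand.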
Hybrid professional competencies — combining digital and AI literacy, transversal (soft) skills, and ethical oversight capabilities — are necessary in AI-driven environments.
Consolidated finding from accreditation journal sources analyzed via thematic content analysis in the qualitative library research (number and identity of sources not specified).
Sustainable adaptation to AI requires continuous upskilling and reskilling ecosystems supported by organizations and policymakers.
Recommendation drawn from thematic synthesis of policy and organizational literature reviewed in the study (qualitative review; no quantified samples provided).
AI supports innovative work models such as human–AI collaboration.
Thematic synthesis of journal sources discussing AI adoption and work models in the qualitative library research (number of sources unspecified).
AI increases productivity.
Consolidated evidence from recent peer-reviewed studies included in the qualitative literature review (specific studies and sample sizes not listed).
AI generates new job categories.
Synthesis of findings from accredited journal articles reviewed in the library research (study design: literature analysis; sample size of articles not provided).
AI-supported HR processes would have produced measurable increases in output per worker (labor productivity).
Counterfactual simulations and predictive estimates from the industrial firm dataset projecting output per worker under AI-HRM scenarios.
AI-HRM would have led to better alignment between training and production needs (improved targeting of training intensity to production requirements).
Model links training intensity to production outcomes and projects improved training–production alignment under AI-supported HR processes via regression-based simulations. (Quantitative magnitudes not specified in the description.)