Evidence (4560 claims)
- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Productivity
There is substantial heterogeneity in effects across studies (I^2 = 74%).
Meta-analytic heterogeneity statistic reported in the paper (I^2 = 74%).
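The I^2 statistic can be recovered from Cochran's Q and its degrees of freedom. A minimal sketch; the Q and df values below are hypothetical, chosen only to reproduce the reported 74% (the paper does not report Q or df here):

```python
# I^2: share of total variation in effect estimates attributable to
# between-study heterogeneity rather than within-study sampling error.
def i_squared(Q, df):
    """I^2 = max(0, (Q - df) / Q) * 100, in percent."""
    if Q <= 0:
        return 0.0
    return max(0.0, (Q - df) / Q) * 100.0

# Hypothetical inputs: Q = 50 with df = 13 gives I^2 = 74%.
print(round(i_squared(Q=50.0, df=13), 1))
```

Values of I^2 around 75% are conventionally read as "substantial" heterogeneity, which matches the claim's wording.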
This study analyzes 28 papers (secondary studies and research agendas) published since 2023.
Systematic literature review conducted by the authors of secondary studies and research agendas; sample size explicitly reported as 28 papers; timeframe specified as 'since 2023'.
Three contributions are presented: the Agentic AI Framework (AAF 3.0); a cross-domain synthesis formalising the inverse evidence–complexity relationship; and a phased sociotechnical roadmap integrating governance sequencing, reimbursement reform, and equity safeguards.
Descriptive claim about the paper's outputs. These contributions are stated in the abstract as the study's deliverables based on the narrative review and synthesis of 81 sources.
Agentic AI is defined as autonomous, goal-directed systems capable of multi-step workflow coordination.
Definition provided by the authors within the paper (conceptual framing used for the review).
This structured narrative review of 81 sources (2020–2025) evaluates whether Agentic AI ... can support structural adaptation in ageing health systems.
Methodological statement in the paper: the study is a structured narrative review of 81 sources from 2020–2025.
The framework is depicted across organization areas with primary focus on strategic management and workforce decision-making and secondary focus on finance, operations, and marketing.
Descriptive claim based on the conceptual framework and its mapping to organizational domains within the paper. No empirical application or case studies reported.
This paper outlines a Human–AI Collaborative Decision Analytics Framework integrating five overlapping layers: data, AI analytics, business analytics interpretation, human judgment, and feedback learning.
Presentation of a conceptual framework developed by the authors (conceptual/modeling contribution). No empirical validation reported.
The results presented in the paper are based on a literature search, an analysis of individual tasks across different occupations (conducted within Erasmus+ projects), and discussions with trainers/educators.
Methodological statement from the paper; indicates the types of evidence used. The abstract does not provide numbers for analyzed tasks, the number of occupations, details of Erasmus+ projects, or counts of trainers/educators consulted.
Neither time constraints nor LLM use significantly changes strategic foresight in the startup evaluation task.
Null findings reported from the same experimental comparisons in the 2 × 2 design (N = 348): no statistically significant effects of time constraints or LLM use on the strategic foresight outcome.
The study employed a 2 × 2 experimental design manipulating time constraints and LLM use.
Explicitly reported experimental design in the paper: two factors (time constraints, LLM use) crossed to form four conditions in the startup evaluation task.
The study used a sample of N = 348 participants.
Reported sample size in the paper's experimental study (startup evaluation task); participants across the 2 × 2 experimental design totaled 348.
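In a crossed 2 × 2 design, the two main effects and the interaction reduce to contrasts of the four cell means. A minimal sketch with hypothetical cell means (the paper reports no cell-level statistics, so all numbers below are illustrative):

```python
# Hypothetical mean foresight scores by (time constraint, LLM use) cell.
cells = {
    ("no_limit", "no_llm"): 3.10,
    ("no_limit", "llm"): 3.20,
    ("limit", "no_llm"): 3.00,
    ("limit", "llm"): 3.10,
}

def main_effect(factor_index, level_a, level_b):
    """Difference between mean outcomes at two levels of one factor."""
    mean = lambda level: sum(v for k, v in cells.items()
                             if k[factor_index] == level) / 2
    return mean(level_a) - mean(level_b)

llm_effect = main_effect(1, "llm", "no_llm")       # effect of LLM use
time_effect = main_effect(0, "limit", "no_limit")  # effect of time pressure
# Interaction: does the LLM effect differ with vs. without a time limit?
interaction = ((cells[("limit", "llm")] - cells[("limit", "no_llm")])
               - (cells[("no_limit", "llm")] - cells[("no_limit", "no_llm")]))
print(round(llm_effect, 2), round(time_effect, 2), round(interaction, 2))
```

With N = 348 spread across four cells (~87 per cell), small contrasts like these can easily fail to reach significance, which is consistent with the null findings reported.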
The paper identifies key research gaps and proposes a future research agenda focused on human–AI interaction, organizational governance, and ethical accountability.
Conclusions/recommendations from the conceptual meta-analysis (paper-generated research agenda; no empirical testing reported in abstract).
This study presents a conceptual meta-analysis of interdisciplinary literature on AI-augmented decision-making in organizations.
Methodological statement of the paper (the paper itself is a conceptual meta-analysis); no primary empirical sample reported in the abstract.
Research has insufficiently modeled joint distributional outcomes and environmental performance, and lacks integrated evaluation of AI-enabled sustainable finance under heterogeneous disclosure regimes.
Review-level identification of methodological gaps across the surveyed literature (authors' synthesis of existing studies and their limitations).
There is a shortage of long-horizon causal evidence on non-linear coupling between digitalization and decarbonization, limiting robust policy inference.
Meta-level assessment in the review noting gaps in existing empirical literature (review authors' synthesis of the field; claim about research availability rather than primary data).
Competency mapping involves identifying and aligning the critical skills, knowledge, and abilities required for specific job roles.
Definition provided in the paper (conceptual).
A stratified random sampling method was employed to select a representative sample of 500 IT employees, informed by a pilot study comprising 0.50 percent of the total population.
Sampling description provided in the methods section: stratified random sampling, sample size = 500, pilot study size referenced as 0.50% of population.
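Stratified random sampling draws from each stratum in proportion to its size. A minimal sketch; the strata and population counts are hypothetical, sized so that 500 equals 0.50% of the population as in the paper:

```python
import random

# Hypothetical strata of IT employees (role -> population count).
population = {"developers": 60000, "testers": 20000, "ops": 12000, "managers": 8000}
total = sum(population.values())   # 100,000
sample_size = 500                  # 0.50% of the hypothetical population

random.seed(0)
sample = {}
for stratum, n in population.items():
    # Proportional allocation: each stratum contributes n/total of the sample.
    k = round(sample_size * n / total)
    sample[stratum] = random.sample(range(n), k)  # ids drawn without replacement

print({s: len(ids) for s, ids in sample.items()})
```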
The study analyzes data from the period 2021 to 2023 using Multiple Regression Analysis as the principal analytical technique.
Methods statement provided in the paper (timeframe and analytical method).
The primary objective of this research is to examine the impact of AI adoption on competency mapping practices in the IT sector.
Explicitly stated research objective in the paper.
The study employs the Difference-in-Differences (DiD) method to estimate AI impacts on online labor markets over time.
Methodological statement in the abstract specifying the use of Difference-in-Differences for empirical identification; implementation details (controls, parallel trends checks, sample size) are not given in the abstract.
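The canonical two-group, two-period DiD estimator is the difference between the treated group's pre/post change and the control group's pre/post change. A minimal sketch with hypothetical group means (the abstract reports no estimates, so these numbers are illustrative only):

```python
# Hypothetical mean outcomes (e.g., log posted wages) by group and period.
means = {
    ("treated", "pre"): 2.00, ("treated", "post"): 2.30,
    ("control", "pre"): 1.90, ("control", "post"): 2.05,
}

def did(means):
    """DiD = (treated post - pre) - (control post - pre)."""
    return ((means[("treated", "post")] - means[("treated", "pre")])
            - (means[("control", "post")] - means[("control", "pre")]))

print(round(did(means), 2))
```

The estimator's validity rests on the parallel-trends assumption, which, as the note observes, the abstract does not report checking.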
The Act instituted a rigid seven-percent per-country cap that allocates the same number of visas to India (population of 1.4 billion) as to Iceland (population of 400,000).
Statutory per-country cap (7% rule in the INA) combined with publicly available country population figures for India and Iceland; claim about identical allocation follows directly from the 7% rule.
The Immigration Act of 1990 established a ceiling of 140,000 employment-based green cards annually.
Statutory fact derived from the Immigration Act of 1990 and the Immigration and Nationality Act (INA) provisions setting employment-based annual numerical limits.
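The two statutory numbers combine directly: the 7% per-country cap applied to the 140,000 annual ceiling yields the identical per-country maximum the claim describes. A back-of-the-envelope sketch (the statute's floors and unused-visa spillover rules are not modeled):

```python
annual_ceiling = 140_000   # employment-based green cards (Immigration Act of 1990)
per_country_cap = 0.07     # 7% per-country limit in the INA

# Same numerical ceiling regardless of a country's population.
max_per_country = int(annual_ceiling * per_country_cap)
print(max_per_country)
```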
Python code and data required to replicate the results are provided in the paper's appendix.
Author statement that 'Python code and data for replication are included in the appendix.'
The empirical analysis uses a smooth-transition local projection model applied to U.S. productivity and EPU data.
Methodological statement in the paper describing the estimation approach and the data inputs; replication materials (Python code and data) are included in the appendix.
This study uses panel data from 30 Chinese provinces (2011–2022) and estimates a spatial simultaneous equations model using the Generalized Spatial Three-Stage Least Squares (GS3SLS) approach.
Described methodology in the paper: panel dataset covering 30 provinces over 2011–2022 (12 years), spatial simultaneous equations estimated by GS3SLS.
Deterministic automated verifiers provide objective pass/fail checks for task success.
Methods section: verifiers are deterministic and automated, enabling objective evaluation of whether an agent's trajectory accomplished the task.
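A deterministic verifier of this kind is just a pure function from a task's end state to pass/fail, with no model judgment involved. A minimal sketch; the task and expected state are hypothetical, not drawn from the paper's benchmark:

```python
def verify_csv_sorted(rows, key):
    """Deterministic pass/fail: did the agent leave `rows` sorted by `key`?"""
    values = [r[key] for r in rows]
    return values == sorted(values)

# Hypothetical end state of an agent trajectory for a "sort the table" task.
trajectory_output = [{"id": 1}, {"id": 2}, {"id": 3}]
print("PASS" if verify_csv_sorted(trajectory_output, "id") else "FAIL")
```

Because the check is a deterministic function of the artifact, repeated runs of the same trajectory always receive the same grade, which is what makes pass-rate comparisons across conditions objective.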
Scale of experiments: seven agent–model configurations and 7,308 execution trajectories were used to compute pass rates and deltas.
Reported experimental scale in Methods: 7 agent–model configurations and a total of 7,308 agent execution traces collected and analyzed across tasks/conditions.
Each task was evaluated under three conditions: (1) no Skills, (2) curated (human-authored) Skills, and (3) self-authored (model-generated) Skills.
Experimental protocol described in Methods: three-arm evaluation per task across the SkillsBench benchmark.
SkillsBench benchmark: evaluates 86 tasks spanning 11 domains with deterministic, automated verifiers.
Dataset and benchmark description in the paper: SkillsBench contains 86 tasks across 11 domains and uses deterministic pass/fail verifiers for objective evaluation.
Research should prioritize dynamic, task-based models that include transitional frictions, heterogeneous agents, and sectoral structure to better measure AI exposure and impacts.
Methodological recommendation grounded in the paper's theoretical critique of static occupation-level automation metrics and noted empirical gaps.
Timing uncertainty and measurement challenges make forecasting the pace and scale of AI-induced employment change inherently uncertain.
Methodological limitations section noting uncertainty in AI adoption speed and difficulties mapping capabilities to tasks and predicting new occupation emergence.
Research agenda: there is a need for causal studies on AI’s impact on accounting labor demand and firm performance, analyses of distributional effects across firm sizes and industries, and evaluation of regulatory frameworks for reliable, interpretable AI in financial reporting.
Author-stated research priorities drawn from gaps identified in the literature review; not an empirical finding.
Policy implications include workforce retraining, standards for AI auditability and transparency, and regulation balancing innovation and controls (privacy, fraud prevention).
Policy recommendations based on identified risks and barriers discussed in the paper rather than empirical policy evaluation.
For stronger causal evidence, recommended empirical methods include difference-in-differences on adopting firms vs. controls, matched samples, and randomized pilots for particular tools, supplemented by qualitative interviews.
Methodological recommendations stated in the paper (not an empirical finding); no implementation/sample reported in the abstract.
Actionable research priorities include running larger-scale field trials linking game use to observed land-use and economic outcomes, developing validation protocols for game-backed models against empirical on-farm data, studying heterogeneity of impacts, and designing incentive mechanisms that leverage game-demonstrated profitability co-benefits.
Synthesis-driven recommendations based on identified evidence gaps—specifically the predominance of small-scale/qualitative studies and lack of long-term/causal evidence.
Rigorous economic evaluation (RCTs, quasi-experiments) is needed to quantify how game-enhanced DSTs affect investment, land-use choices, emissions outcomes, and farm incomes.
Chapter recommendation grounded in observed gaps: the literature lacks sufficiently rigorous causal impact evaluations; current evidence is largely qualitative or observational.
The empirical strategy uses baseline panel regressions with standard controls (e.g., firm size, performance, leverage) and fixed effects to estimate the AI → pay relationship.
Methods section describing regression specifications including firm controls and fixed effects applied to the A-share firm panel.
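Firm fixed effects can be implemented via the within transformation: demean the outcome and regressor inside each firm, then run OLS on the demeaned series. A minimal one-regressor sketch on hypothetical data (the paper's actual specification includes multiple controls and additional fixed effects):

```python
from collections import defaultdict

# Hypothetical firm-year panel: (firm, AI intensity x, executive pay y).
panel = [
    ("A", 0.0, 10.0), ("A", 1.0, 12.0),
    ("B", 0.0, 20.0), ("B", 1.0, 21.5),
]

# Within transformation: subtract each firm's own mean from x and y.
groups = defaultdict(list)
for firm, x, y in panel:
    groups[firm].append((x, y))

dx, dy = [], []
for obs in groups.values():
    mx = sum(x for x, _ in obs) / len(obs)
    my = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        dx.append(x - mx)
        dy.append(y - my)

# OLS slope on demeaned data = fixed-effects estimate of the AI -> pay effect.
beta = sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)
print(beta)
```

Demeaning removes any time-invariant firm characteristic, so the slope is identified only from within-firm variation over time.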
Data consist of a panel of Chinese A-share listed companies covering 2007–2023.
Data description in the paper specifying the sample period and population (A-share listed firms, 2007–2023).
The firm-level AI application indicator is constructed via textual analysis of corporate disclosures (e.g., filings/annual reports) to capture AI application intensity.
Methodological description in the paper describing text-based construction of an AI application indicator from corporate disclosures for listed firms in the 2007–2023 sample.
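Text-based indicators of this kind are typically normalized keyword frequencies computed over disclosure documents. A minimal sketch with a hypothetical keyword dictionary (the paper does not publish its lexicon, and its Chinese-language processing would differ):

```python
import re

# Hypothetical AI keyword dictionary; the paper's actual lexicon is not disclosed.
AI_TERMS = {"artificial intelligence", "machine learning",
            "deep learning", "neural network"}

def ai_intensity(report_text):
    """AI keyword hits per 1,000 words of a disclosure document."""
    words = re.findall(r"[a-z]+", report_text.lower())
    text = " ".join(words)
    hits = sum(text.count(term) for term in AI_TERMS)
    return 1000 * hits / max(len(words), 1)

sample = "We invest in machine learning and deep learning to automate reporting."
print(round(ai_intensity(sample), 1))
```

Normalizing by document length keeps the indicator comparable across firms whose annual reports differ in size.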
Empirical validation of the integrated Kondratieff–Schumpeter–Mandel framework requires firm-level adoption and profitability data, sectoral investment series, and cross-country comparisons using panel methods and identification strategies (e.g., diff-in-diff, IV).
Methods/limitations section recommendation (explicitly states no single micro-econometric identification strategy was reported and outlines required data/methods).
The three frameworks (Kondratieff, Schumpeter, Mandel) are complementary: Kondratieff frames periodicity, Schumpeter provides micro-mechanisms of innovation-driven change, and Mandel foregrounds socio-political constraints and distributional outcomes.
Conceptual integration and comparative theoretical analysis (qualitative synthesis).
Kondratieff's framework is useful for identifying broad periodicities (recurring phases of expansion and stagnation) in capitalist development but is less specific about microeconomic mechanisms.
Theoretical review of Kondratieff literature and conceptual assessment (qualitative).
No new laboratory measurements or datasets are reported in the paper; the approach is methodological and conceptual rather than empirical.
Methods section and explicit statements within the paper noting absence of new data; verifiable by reading the paper.
These operators are presented as conceptual/theoretical bridges rather than immediately quantifiable laboratory units.
Explicit methodological statement in the paper emphasizing interpretive/theoretical intent; no empirical operationalization reported.
Policy recommendations include: invest in open metadata standards; fund pilot programs to evaluate ROI (earnings, placement, employer satisfaction); require model governance and periodic external audits for AI-assisted curriculum tools; and support smaller providers via shared infrastructure or accreditation hubs.
Explicit policy recommendations in paper (prescriptive).
Careful attention is needed to validity/reliability of assessments and to selection bias in employment outcome measurement.
Paper's methodological caveat (prescriptive); no empirical bias analysis provided.
Suggested evaluation metrics include placement rates, wage premiums, competency attainment, compliance scores, cost per qualification, and update latency.
Paper's recommended evaluation metrics (prescriptive).
Implementation requires integration with information systems for documentation, versioning, metadata, and audit trails, and benefits from continuous monitoring dashboards.
Paper's technical implementation recommendations (prescriptive).
Recommended analysis methods are qualitative (semi-structured interviews, focus groups, document review) and quantitative (surveys, competency mapping, statistical analysis of outcomes), plus systematic audit methods including traceability checks.
Paper's methods section (methodological specification).
Data inputs for the framework should include competency taxonomies, labor-market signals, regulatory requirements, learner assessment results, and stakeholder interviews.
Paper's data-input specification (descriptive).