Evidence (2,432 claims)

Claims by category:
- Adoption: 5,126 claims
- Productivity: 4,409 claims
- Governance: 4,049 claims
- Human-AI Collaboration: 2,954 claims
- Labor Markets: 2,432 claims
- Org Design: 2,273 claims
- Innovation: 2,215 claims
- Skills & Training: 1,902 claims
- Inequality: 1,286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Labor Markets
Integration of risk management with strategy-setting and operational processes is essential to realizing its benefits.
Thematic findings from the literature review and recommendations in established frameworks (ISO 31000, COSO ERM); synthesized across peer-reviewed and practitioner literature.
An embedded risk culture and clear accountability across the organization are necessary enablers for effective risk management.
Repeatedly reported across reviewed literature and standards (e.g., ISO/COSO) in the thematic synthesis; supported by multiple secondary sources in the ten-year scope.
Leadership and governance commitment (board and senior management buy-in) is a core component required for effective risk management implementation.
Consistent identification of leadership/governance as an enabling factor across multiple peer-reviewed articles, books, and risk frameworks synthesized in the review; thematic analysis of literature over the last ten years.
The task frontier expands: new tasks become profitable and are created endogenously as coordination costs decline.
Analytical derivation in the model (proposition about task frontier) and simulation exercises that permit endogenous task entry.
Aggregate output increases when coordination costs fall because reduced frictions and endogenous task creation raise productive capacity.
Analytical result (one of the five propositions) showing comparative statics of output with respect to coordination compression; supported by calibrated numerical simulations.
Lower coordination costs expand managers’ spans of control (managers can supervise more subordinates).
Analytical comparative statics derived in the model (one of the five propositions) and corroborating numerical simulations with heterogeneous agents.
A one standard-deviation increase in AI adoption causally increases employment in occupations requiring complex problem-solving and interpersonal skills by 1.8%.
Same panel (38 OECD countries, 2019–2025) and AI Adoption Index; IV estimation with occupational employment classified by task type (complex problem-solving & interpersonal); fixed effects and robustness checks reported.
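The IV logic behind an estimate like this can be sketched numerically. This is a hedged toy on synthetic data (not the OECD panel; `true_beta`, the variable names, and the data-generating process are all illustrative): with a single instrument z, the IV estimator reduces to cov(z, y) / cov(z, x), which keeps only the variation in adoption that is driven by the instrument and so strips out the confounded part that biases OLS.

```python
# Synthetic illustration of IV vs OLS; numbers are made up for the sketch.
import random

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

rng = random.Random(1)
n = 50_000
true_beta = 1.8                                   # illustrative effect size
z = [rng.gauss(0, 1) for _ in range(n)]           # instrument
u = [rng.gauss(0, 1) for _ in range(n)]           # unobserved confounder
x = [zi + ui + rng.gauss(0, 1) for zi, ui in zip(z, u)]                    # endogenous adoption
y = [true_beta * xi + 2.0 * ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]  # employment outcome

beta_ols = cov(x, y) / cov(x, x)  # biased upward: x and y share the confounder u
beta_iv = cov(z, y) / cov(z, x)   # consistent for true_beta
print(f"OLS: {beta_ols:.2f}  IV: {beta_iv:.2f}")
```

With the confounder present, OLS overstates the effect while the IV estimate recovers the true coefficient, which is the identification argument such panel studies rely on.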
Overinvestment increases inequality (greater tail concentration of income).
Model computations showing that exponential returns amplify income at the top; comparative statics indicate inequality measures rise with greater investment/technology under lognormal wage assumption.
Overinvestment increases measured GDP (output).
Comparative statics in the theoretical model linking higher private investment/technology adoption to higher aggregate output; model shows positive effect on measured GDP despite welfare loss possibilities.
The exponential returns to skill and technology create strong private incentives for agents to escalate skill (education) investment toward the high tail of the distribution (an educational arms race).
Equilibrium analysis and comparative statics in the theoretical model showing that marginal returns to additional investment are increasing toward the distribution tail, producing higher optimal private investment at the top relative to social optimum.
When wages follow a lognormal distribution, technological progress makes wages increase exponentially in both skill and technology.
Analytical derivation in the paper's economic model that assumes a lognormal wage distribution and specifies wages as an exponential function of skill and a technology parameter; result follows from model algebra (no empirical data).
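The tail-concentration mechanism in this lognormal model can be illustrated numerically. A minimal sketch on synthetic data, assuming (as the model does) that log-wage is linear in normally distributed skill, so that raising the return coefficient via technology or skill investment widens log-wage dispersion and concentrates income at the top; the function name and parameter values are mine, not the paper's:

```python
# Synthetic sketch: wage = exp(return_coeff * s), s ~ N(0, 1) => wages lognormal.
# A larger return coefficient raises log-wage variance, so the top-1% share grows.
import math
import random

def top_one_percent_share(return_coeff: float, n: int = 200_000, seed: int = 0) -> float:
    """Share of total wages earned by the top 1% of the simulated distribution."""
    rng = random.Random(seed)
    wages = sorted(math.exp(return_coeff * rng.gauss(0.0, 1.0)) for _ in range(n))
    cutoff = int(0.99 * n)
    return sum(wages[cutoff:]) / sum(wages)

low = top_one_percent_share(0.5)    # modest returns to skill
high = top_one_percent_share(1.0)   # amplified returns (more investment/technology)
print(f"top-1% wage share: {low:.3f} -> {high:.3f}")
```

This reproduces the comparative-statics direction claimed above (inequality measures rise with the strength of exponential returns), not the paper's calibrated magnitudes.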
Research priorities include developing robust measures of AI adoption and using causal methods (difference-in-differences, synthetic controls, RDD, IV) to estimate effects of AI and regulation on productivity, employment, and inequality.
Methodological recommendations in the report based on identified evidence gaps and normative evaluation of empirical priorities.
The American Artificial Intelligence Initiative emphasizes R&D and innovation leadership, standards development, workforce readiness, and fostering 'trustworthy AI' (transparency, fairness, accountability).
Primary source policy documents from the U.S. American Artificial Intelligence Initiative reviewed in the report.
The paper introduces a Predictive Skill Gap Intelligence Hub — an AI-driven platform that combines macro- and micro-level indicators with probabilistic growth models and intelligent skill-synthesis to proactively forecast regional and sectoral labor demand–supply gaps.
Description of system architecture and modeling approach in the paper (methods section). No numerical evaluation metrics or datasets provided for this descriptive claim.
Recommended research priorities for economists include measuring how adoption changes task mixes and wages, quantifying verification/remediation costs, estimating productivity gains net of security/IP costs, and studying market dynamics from centralized model providers.
Author recommendations based on identified gaps in the empirical literature synthesized by the paper.
Recommended policy levers include data-governance rules, provenance and watermarking standards, liability frameworks, copyright clarifications, competition policy, and taxes/subsidies to internalize externalities.
Policy recommendations synthesized from legal, regulatory, and economic literatures within the review; presented as qualitative guidance rather than tested policy interventions.
A structured three-stage framework (input/process/output) clarifies where different risks and regulatory rules apply to generative audiovisual systems.
Framework presented in the paper as a conceptual synthesis of reviewed literatures; supported by cross-references to legal, technical, and ethical sources within the review.
The paper introduces IJOPM’s Africa Initiative (AfIn) to support Africa-based OSCM research, outlining motivation, objectives, review process, and researcher support mechanisms.
Descriptive account within the paper (administrative/initiative description rather than empirical evidence).
The paper proposes specific metrics and empirical follow-ups (e.g., generation-to-verification throughput ratios, defect accumulation rates, time-to-acceptance for machine-generated artifacts, incident rates attributable to unverified AI outputs) to validate the model.
Explicit recommendations and measurement proposals listed in the paper; no empirical implementation provided.
Recommended next steps include building and calibrating ABMs with agent heterogeneity, prototyping technical implementations of token verification (proof-of-query receipts, cryptographic attestation), and red-teaming for spoofing/evasion.
Paper's research & policy next-steps and operational recommendations; no implementation results included.
Enhanced gross‑flows estimation using longitudinal microdata can better track transitions (job-to-job, upskilling, unemployment spells) and measure occupational churn and reallocation.
Established econometric practice cited in paper; recommendation to use panel/admin microdata (CPS longitudinal supplements, LEHD/LODES, UI records); no new empirical results but aligns with standard methods.
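A minimal sketch of what gross-flows estimation looks like on linked longitudinal records (a synthetic four-person panel; the state codes and record layout are illustrative, not the CPS/LEHD schema): count person-level month-to-month transitions between labor-market states, then convert counts into transition rates out of each origin state.

```python
# Toy gross-flows computation on synthetic linked records.
from collections import Counter

# (person_id, month) -> state; E = employed, U = unemployed, N = not in labor force
panel = {
    (1, 1): "E", (1, 2): "E",   # stayer
    (2, 1): "E", (2, 2): "U",   # job loss
    (3, 1): "U", (3, 2): "E",   # re-employment
    (4, 1): "N", (4, 2): "N",   # out of the labor force throughout
}

flows = Counter()
for (pid, month), state in panel.items():
    nxt = panel.get((pid, month + 1))
    if nxt is not None:
        flows[(state, nxt)] += 1

origin_totals = Counter()
for (origin, _), count in flows.items():
    origin_totals[origin] += count

rates = {pair: count / origin_totals[pair[0]] for pair, count in flows.items()}
print(rates)  # e.g., ("E", "U") is the monthly employment-to-unemployment hazard
```

Real applications would additionally handle attrition, seam bias, and weighting, which is why the paper points to panel/administrative microdata rather than repeated cross-sections.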
Team Situation Awareness (shared perception, comprehension, projection) remains a useful analytic anchor for human-autonomy teaming (HAT) even with agentic AI.
Conceptual analysis mapping Team SA components onto agentic AI interactions; literature review of Team SA utility in HAT contexts.
Automated equivalency systems require algorithmic oversight features (audit trails, human-in-the-loop checks) to maintain trust and labor-market legitimacy.
Governance recommendation following best practices in algorithmic accountability; not supported by empirical testing of oversight mechanisms in this context.
AI tools (automated document parsing/NLP, translation, equivalency-prediction classifiers, anomaly detection) can scale credential processing and reduce transaction costs and processing time.
Paper cites potential AI capabilities and application areas; the claim is inferential from known AI functionalities, with no implementation benchmark or throughput numbers provided.
Continuous monitoring and observability for performance, compliance, and drift are essential to maintain operational stability and detect model or process degradation.
Prescriptive claim grounded in engineering practice and comparative analysis of failure modes; supported by illustrative deployments; no quantitative evaluation of monitoring impact reported.
Core governance components should include policy enforcement integrated into development and deployment pipelines, risk controls for data/model behavior/automated actions, explicit human-in-the-loop and human-on-the-loop oversight, continuous monitoring/logging/incident-response, and role-based governance structures linking legal, compliance, IT, and business units.
Prescriptive design based on literature synthesis and practitioner experience; described as core components in the proposed reference pattern (conceptual, case-illustrated).
Research needs include empirically measuring prevalence and average loss from prompt fraud incidents, evaluating effectiveness and cost-effectiveness of technical mitigations (watermarking, provenance), and modeling firm-level investment decisions under varying regulatory/insurance regimes.
Authors' recommended agenda for further research based on identified gaps in the paper's qualitative analysis.
All data are openly available at https://www.antscan.info.
Explicit statement of public repository/portal and URL provided in the paper.
The dataset includes metadata such as taxonomic labels, collection/locality data, and links to genome projects where available.
Paper states dataset contents include whole-body volumes/meshes and associated metadata (taxonomic labels, locality, genome links).
The scanning pipeline was optimized and standardized to enable digitizing hundreds to thousands of specimens.
Authors describe an optimized, standardized pipeline and cite the achieved output (2,193 scans) as demonstration.
The project demonstrated a high-throughput application of synchrotron X-ray microtomography for whole-organism digitization at scale.
Combination of method (synchrotron microCT), standardized pipeline, and production of 2,193 scans presented as evidence of high-throughput capability.
Imaging modality used is synchrotron X-ray microtomography (high-resolution 3D imaging).
Method section details use of synchrotron X-ray microtomography for whole-body imaging.
Scans were acquired with standardized parameters to facilitate automated and replicable analysis and benchmarking.
Paper describes a standardized acquisition protocol and pipeline (synchrotron X-ray microtomography) and notes standardized parameters and metadata format.
The dataset covers taxonomic breadth of 212 genera and 792 species.
Reported counts of taxa included in the dataset as stated in the paper.
The Antscan project produced 2,193 whole-body 3D ant datasets (scans).
Reported dataset size in the paper: 2,193 whole-body 3D volumes/meshes produced via the described scanning pipeline.
Systems biology, constraint‑based metabolic modeling (e.g., FBA), kinetic modeling, and hybrid models are effective tools to predict fluxes and identify metabolic bottlenecks.
Discussion and aggregation of modeling studies using COBRA/OptFlux frameworks, FBA simulations, and kinetic/dynamic modeling applied to engineered strains to predict flux changes and suggest genetic interventions; validated in multiple reported DBTL cycles.
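For reference, the constraint-based (FBA) step mentioned above is the standard linear program over the stoichiometric matrix S (the textbook formulation, not specific to any single study cited here):

```latex
\max_{v}\; c^{\top} v
\quad \text{subject to} \quad S\,v = 0, \qquad v_{\min} \le v \le v_{\max}
```

where v is the vector of reaction fluxes, S v = 0 imposes steady-state mass balance, the flux bounds encode reaction reversibility and uptake limits, and c selects the objective flux (e.g., biomass or product formation). Kinetic and hybrid models relax the steady-state assumption by adding explicit rate laws.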
Engineered microorganisms are maturing into modular, programmable “microbial factories” capable of producing complex chemicals, specialty compounds, and next‑generation biofuels.
Synthesis of multiple experimental case studies reported in the literature (bench and pilot scale fermentations) demonstrating microbial production of natural products, specialty chemicals, and biofuel molecules using engineered strains and heterologous pathways; methods include pathway assembly, enzyme engineering, and fermentation optimization.
China’s National Public Cultural Service System Demonstration Zone program raised employment in the cultural sector.
Multi-period difference-in-differences (DID) analysis exploiting staggered adoption of the Demonstration Zone designation across 280 prefecture-level Chinese cities, 2008–2021; primary outcome measured: city-level cultural-sector employment; models include city and year fixed effects.
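The staggered, fixed-effects DID design described here generalizes the basic 2x2 comparison. A toy numerical sketch of that core logic, with hypothetical employment numbers rather than the paper's data:

```python
# Toy 2x2 difference-in-differences; the city/year fixed-effects model in the
# paper generalizes this to staggered designation across 280 cities, 2008-2021.
pre = {"treated": 100.0, "control": 90.0}    # mean cultural-sector employment, before
post = {"treated": 130.0, "control": 105.0}  # mean employment, after designation

trend_treated = post["treated"] - pre["treated"]  # effect + common trend
trend_control = post["control"] - pre["control"]  # common trend only
did_estimate = trend_treated - trend_control      # the DID treatment-effect estimate
print(did_estimate)
```

Differencing out the control group's trend is what lets the design attribute the remaining change in the designated cities to the program rather than to a sector-wide trend.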
A one standard deviation increase in regional AI exposure raises total factor energy efficiency (TFEE) by about 3.2% in Chinese cities.
Panel analysis of 274 Chinese cities over 2007–2021 using an AI exposure index and TFEE as outcome; causal estimation relies on an instrumental-variables strategy (instruments: U.S. robot-adoption patterns and geographic proximity to external AI clusters).
A research agenda prioritizing empirical evaluation, model transparency, and rigorous impact assessment is required to translate conceptual promise into measurable public value.
Explicit recommendation in the blurb identifying research priorities; not an empirical claim but a proposed course of action.
Illustrative vignettes show AI in action: logistics optimization for trade, AI models for national fiscal decision-making, and algorithmic job-acceleration for individual labor market navigation.
Reference to specific case vignettes contained in the book; these are illustrative scenarios rather than empirical case studies with measured outcomes.
Ten defining policy questions structure the book’s approach, turning abstract AI capabilities into operational policy choices.
Descriptive claim about the book's organization; verifiable by inspecting the book's table of contents (no external empirical data).
International comparability in these analyses is achieved using PPP adjustments for monetary measures and standardized occupation/task classifications (ISCO-08), with harmonized baseline years and variable definitions.
Described data harmonization procedures across multi-country firm and worker datasets, including PPP adjustments and use of ISCO classification for occupations.
Adoption of advanced AI tools (especially generative AI) raises firm-level productivity on average.
Meta-analysis of firm-level panel studies using administrative tax and manufacturing surveys and proprietary AI-usage logs; difference-in-differences and event-study estimates comparing adopters vs non-adopters with firm fixed effects and robustness checks.
There is a need for standardized metrics to quantify benefits and costs of governed hyperautomation (e.g., ROI adjusted for compliance risk, incident rate per automation scale, oversight hours per automated transaction, model drift frequency and remediation cost).
Paper's recommendations and research agenda calling for standardized metrics and empirical studies; prescriptive statement rather than empirical finding.
The positive effect of digital rural development on AGTFP is robust to alternative variable constructions, sample adjustments, and endogeneity treatments (e.g., instrumental-variable and related methods).
Robustness exercises reported in the paper: re-specification of the digitalization measure, re-sampling/alternative sample specifications, and use of instrumental/other methods to address endogeneity.
Digital rural development in China significantly increases agricultural green total factor productivity (AGTFP).
Fixed-effects panel regression using provincial panel data for 30 Chinese provinces from 2012–2022 (≈330 province-year observations), with reported significance and robustness checks (alternative measures, sample adjustments, and endogeneity tests).
There is a widespread consensus across the reviewed literature on the need for worker upskilling, active labor‑market policies, and targeted support for displaced workers.
Policy recommendations recurring in the majority of the 17 peer‑reviewed papers synthesized in the review.
The framework supports counterfactual scenario simulations that vary capability diffusion, adoption rates, policy interventions, and firm behavior to explore how exposures might translate into outcomes.
Description of scenario and simulation capabilities in the methods: Agent-based experiments run with parameterized counterfactuals for diffusion, adoption, and policy levers.
Alternative training channels (self-education and professional retraining) are nontrivial contributors to the AI workforce supply.
Comparative analysis showing inclusion of self-education and retraining contributions in the aggregate coverage estimate (the 43.9% figure explicitly includes these channels); descriptive counts/estimates of non-degree trained entrants.