Evidence (2432 claims)

Claims by topic category:
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. In some rows the total exceeds the sum of the four listed directions, presumably because a small number of claims have an unclassified direction.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Labor Markets
Implementation of RATs requires instrumentation at the browser/platform level or via plugins and must address privacy/consent, storage/ownership, sharing controls, and interoperable trace formats.
Design and implementation considerations enumerated in the paper; this is a requirements statement rather than an empirical claim.
Analytical approaches compatible with RATs include sequence/trajectory mining, network analysis of associations/co-read graphs, embedding/clustering of trajectories, qualitative inspection of reflections, and experimental (A/B or RCT) evaluation of downstream effects.
Methods section of the paper listing suggested analytical techniques; these are proposed methods rather than applied analyses.
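For illustration, the co-read graph analysis named above could be sketched as follows; the trace format and field names (reader_id, doc_id) are hypothetical and only demonstrate the technique, not the paper's implementation.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical reading-trace records: (reader_id, doc_id) pairs.
traces = [
    ("r1", "paperA"), ("r1", "paperB"), ("r1", "paperC"),
    ("r2", "paperA"), ("r2", "paperB"),
    ("r3", "paperB"), ("r3", "paperC"),
]

# Group documents by reader.
docs_by_reader = defaultdict(set)
for reader, doc in traces:
    docs_by_reader[reader].add(doc)

# Build a weighted co-read graph: edge weight = number of readers
# who read both documents.
co_read = Counter()
for docs in docs_by_reader.values():
    for a, b in combinations(sorted(docs), 2):
        co_read[(a, b)] += 1

for (a, b), weight in co_read.most_common():
    print(f"{a} -- {b}: co-read by {weight} reader(s)")
```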
The paper is primarily theoretical and prescriptive: it synthesizes literature and proposes a framework and design guidelines rather than reporting large-scale empirical datasets or causal identification of economic outcomes.
Meta-claim about the paper's methods explicitly stated in the Data & Methods summary; based on the paper's methodological description.
Key measurable outcomes to assess Human–AI teams include accuracy/efficiency, robustness to novel cases, decision consistency, trust/misuse rates, training costs, and inequity indicators.
Prescriptive list of metrics offered by the authors as part of the research agenda and evaluation guidance; not empirically derived from a dataset in the paper.
Empirical evaluation strategies for Human–AI teams should include randomized interventions, field trials, lab experiments, phased rollouts (difference-in-differences), and structural models that allow interaction terms between human skill and AI quality.
Methodological recommendation in the paper; suggested study designs rather than implemented analyses.
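As one concrete illustration of the phased-rollout (difference-in-differences) design recommended above, a minimal two-way fixed-effects sketch might look like this; the data, variable names, and effect size are synthetic placeholders, not results from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic phased rollout: 40 teams observed over 8 quarters,
# half receive an AI assistant starting in quarter 4.
rng = np.random.default_rng(0)
teams, quarters = 40, 8
rows = []
for t in range(teams):
    treated_group = t < teams // 2
    team_effect = rng.normal(0, 1)
    for q in range(quarters):
        treated = int(treated_group and q >= 4)
        y = 10 + team_effect + 0.3 * q + 1.5 * treated + rng.normal(0, 1)
        rows.append({"team": t, "quarter": q, "treated": treated, "y": y})
df = pd.DataFrame(rows)

# Two-way fixed effects DiD: team and quarter dummies absorb level
# differences; the 'treated' coefficient is the DiD estimate.
model = smf.ols("y ~ treated + C(team) + C(quarter)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["team"]}
)
print(model.params["treated"], model.bse["treated"])
```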
Research priorities include empirical measurement of task‑level automation rates, firm and industry productivity effects, wage impacts across occupations, and diffusion patterns.
Paper's stated research agenda and identification of measurement gaps; based on methodological critique of current evidence base.
Measuring these productivity gains will be challenging because quality improvements, faster iteration, and creative outputs are harder to price/observe than lines of code.
Methodological argument about measurement difficulty; based on conceptual considerations, not empirical validation.
Measuring AI's economic impact requires new metrics that account for decision-value uplift, reduced tail-risk exposures, and dynamic gains from continuous learning; causal identification will require experiments or staggered rollouts.
Methodological recommendation backed by conceptual discussion of measurement challenges; no implementation of such measurement approaches is reported in the paper.
Performance and evaluation should be measured using forecast accuracy, decision lift/value added, latency, and false positive/negative rates.
Paper-prescribed evaluation metrics; presented as recommended practice rather than derived from empirical testing within the paper.
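A minimal sketch of how the recommended metrics could be computed is shown below; the thresholds, labels, and the payoff values behind the "decision lift" figure are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative binary forecasts and outcomes.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
y_pred = (y_prob >= 0.5).astype(int)

# Forecast accuracy and false positive / false negative rates.
accuracy = (y_pred == y_true).mean()
fp = ((y_pred == 1) & (y_true == 0)).sum()
fn = ((y_pred == 0) & (y_true == 1)).sum()
fpr = fp / (y_true == 0).sum()
fnr = fn / (y_true == 1).sum()

# "Decision lift": value added relative to a naive approve-everything policy,
# assuming (hypothetically) each correct positive decision is worth 100
# and each false positive costs 40.
value_model = 100 * ((y_pred == 1) & (y_true == 1)).sum() - 40 * fp
value_baseline = 100 * (y_true == 1).sum() - 40 * (y_true == 0).sum()
lift = value_model - value_baseline

print(accuracy, fpr, fnr, lift)
```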
Core AI techniques for these frameworks include supervised/unsupervised ML, NLP for unstructured text, anomaly detection for control/transaction monitoring, and reinforcement/prescriptive models for recommendations.
Methodological claim listing standard ML/NLP/anomaly-detection techniques and prescriptive approaches; statement of methods rather than an empirical comparison of alternatives.
Next‑gen frameworks draw on large-scale structured data (transactions, ledgers, KPIs) and unstructured sources (reports, news, contracts, call transcripts) to power models.
Descriptive claim listing data types the paper recommends; presented as design input requirements rather than empirically validated data-integration projects.
There is a need for quantitative studies and microdata on firm-level RM practices, AI adoption, and performance outcomes to measure effect sizes and causal pathways.
Stated research gaps and limitations in the review (lack of primary empirical quantification; heterogeneity across contexts).
The review's conclusions are limited by reliance on published literature (potential bias toward successful implementations), the lack of primary empirical quantification (no effect sizes), and heterogeneity across organizational contexts, which limits direct generalizability.
Explicit limitations stated in the paper summarizing scope and method (qualitative literature review, secondary evidence only).
Heterogeneity in system designs and deployment contexts complicates cross-site comparisons.
Limitations section and observed variation in platform architectures, degrees of automation, and governance across sites reported via descriptive data and interviews.
Non-random selection of institutions limits causal inference and external generalizability of the study's findings.
Study limitations explicitly state non-random site selection and heterogeneous deployments; methodological note that causal claims are constrained.
Estimation, calibration, stability assessment, and global sensitivity methods: parameters were calibrated/estimated on 2016–2023 data; the equilibrium was located; Jacobian eigenvalues were computed to assess local stability; and a variance-based global sensitivity analysis was performed over the parameter space.
Methods section: description of parameter estimation/calibration, equilibrium computation, Jacobian-based stability analysis, and variance-based global sensitivity analysis.
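The Jacobian-based local stability check described above can be sketched numerically as follows; the toy dynamics, parameter values, and starting point are placeholders, not the paper's calibrated system.

```python
import numpy as np
from scipy.optimize import fsolve

def dynamics(x):
    """Toy three-stock Lotka-Volterra-type system with self-limiting terms.

    x = (AI capital, physical capital, labor); all growth, saturation,
    and interaction coefficients are placeholders, not the paper's estimates.
    """
    ai, pk, lab = x
    return np.array([
        ai * (0.40 - 0.0800 * ai + 0.02 * pk),   # AI capital
        pk * (0.30 - 0.0375 * pk + 0.01 * ai),   # physical capital
        lab * (0.20 - 0.0333 * lab - 0.01 * ai), # labor
    ])

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = eps
        J[:, j] = (f(x + step) - f(x - step)) / (2 * eps)
    return J

# Locate an interior equilibrium, then check local stability via eigenvalues.
x_star = fsolve(dynamics, x0=[7.0, 10.0, 4.0])
eigvals = np.linalg.eigvals(numerical_jacobian(dynamics, x_star))
print("equilibrium:", x_star)
print("eigenvalues:", eigvals)
print("locally stable" if np.all(eigvals.real < 0) else "not locally stable")
```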
The main empirical conclusions are based on a short annual panel (2016–2023) and a stylized aggregate interaction model; results should be interpreted with caution due to potential omitted variables, aggregation bias, and limited sample size.
Explicit limitations listed in the paper: short time series (eight annual observations), national aggregate data, simplified model structure, no firm/sector heterogeneity, possible endogeneity/measurement issues.
The empirical analysis uses annual, national-level aggregate Chinese series for 2016–2023 as proxies for AI capital, physical capital stock, and labor compensation (wage bill).
Data description in Data & Methods: annual Chinese aggregate series 2016–2023. Implied sample length: 2016–2023 inclusive (8 annual observations); national-level aggregates, no firm-level heterogeneity modeled.
The paper models interactions among AI capital, physical capital, and labor using a Lotka–Volterra (predator–prey type) system adapted to include self-limiting (saturation) terms.
Model specification described in Methods: deterministic Lotka–Volterra system with added self-limitation terms for three stocks (AI capital, physical capital, labor).
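The summary does not give the exact functional form; one parameterization consistent with the description (three stocks, pairwise interactions, self-limiting saturation terms) would be:

```latex
\begin{aligned}
\dot{A} &= A\left(r_A - s_A A + \alpha_{AK} K + \alpha_{AL} L\right),\\
\dot{K} &= K\left(r_K - s_K K + \alpha_{KA} A + \alpha_{KL} L\right),\\
\dot{L} &= L\left(r_L - s_L L + \alpha_{LA} A + \alpha_{LK} K\right),
\end{aligned}
```

where A, K, L are AI capital, physical capital, and labor; the r terms are intrinsic growth rates; the s terms are self-limiting (saturation) coefficients; and the signs of the interaction coefficients α encode the predator-prey-type relations (e.g., α_LA < 0 if AI capital displaces labor). The specific signs and magnitudes are assumptions for illustration.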
Instrumental-variable (IV) estimation is used to address endogeneity of AI adoption and to identify causal effects on employment and wages.
Paper states IV identification strategy applied to the 38-country panel; robustness checks and alternative specifications reported (paper refers to instrument details in full text).
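The instrument itself is not specified in this summary; a generic two-stage least squares (2SLS) setup consistent with the described panel strategy is sketched below, with country i, year t, outcome Y (employment or wages), covariates X, and an unspecified instrument Z:

```latex
\begin{aligned}
\text{First stage:}\quad  & \mathrm{AI}_{it} = \pi_0 + \pi_1 Z_{it} + X_{it}'\gamma + \mu_i + \tau_t + v_{it},\\
\text{Second stage:}\quad & Y_{it} = \beta_0 + \beta_1 \widehat{\mathrm{AI}}_{it} + X_{it}'\delta + \mu_i + \tau_t + \varepsilon_{it},
\end{aligned}
```

where μ_i and τ_t are country and year effects, and β_1 is the causal effect of AI adoption identified under the usual instrument relevance and exclusion assumptions.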
The AI Adoption Index is constructed as a composite measure combining enterprise investment in AI, AI-related patent filings, and workforce/firm surveys on AI use across 38 OECD countries (2019–2025).
Paper's methodological description of the index construction; data sources enumerated as investment, patenting, and survey measures over the panel period.
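A minimal sketch of one common way to build such a composite (z-score standardization with equal weights) is shown below; the actual weighting and aggregation used for the AI Adoption Index are not specified in this summary, so the scheme and the example values here are assumptions.

```python
import pandas as pd

# Hypothetical country-year panel with the three ingredient measures.
panel = pd.DataFrame({
    "country": ["AUT", "AUT", "BEL", "BEL"],
    "year": [2019, 2020, 2019, 2020],
    "ai_investment": [1.2, 1.5, 0.8, 1.1],    # e.g. share of GDP
    "ai_patents": [30, 42, 18, 25],           # filings per million workers
    "ai_use_share": [0.11, 0.16, 0.07, 0.10], # share of firms reporting AI use
})

components = ["ai_investment", "ai_patents", "ai_use_share"]

# Standardize each component across the panel, then average with equal weights.
z = (panel[components] - panel[components].mean()) / panel[components].std()
panel["ai_adoption_index"] = z.mean(axis=1)

print(panel[["country", "year", "ai_adoption_index"]])
```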
The paper is entirely theoretical/analytical and does not report an empirical dataset.
Paper methodology section and abstract state primary tool is an analytical economic model; no empirical data or sample sizes are reported.
The same formal framework can be interpreted as a firm-level model where human skill investment maps onto AI/chatbot investment decisions.
Paper provides an alternative interpretation and formally maps agent skill-investment choices into an analogous firm R&D/AI-capital decision problem within the same mathematical framework.
Research and monitoring priorities for economists include task-level analyses of substitutability/complementarity, modeling adoption as a function of regulatory costs and reimbursement incentives, and evaluating long-run welfare and distributional effects.
Explicit research recommendations stated in the narrative review, based on gaps identified in the literature and evolving empirical questions.
Policymakers and payers should consider liability reform, reimbursement models that reward safe human–AI collaboration, funding for independent clinical validation, and measures to prevent market concentration.
Policy recommendations and implications derived from the narrative review's synthesis of regulatory, economic, and implementation challenges.
There is a need for validated administrative and firm-level data on AI adoption, workplace monitoring, and worker outcomes, and for evaluation of policy interventions (mandated impact assessments, transparency requirements, worker representation rules) using randomized or quasi-experimental designs where feasible.
Research and measurement priorities set out in the commentary based on identified gaps; prescriptive recommendation rather than evidence-based finding.
The paper is a policy and legal commentary/synthesis and not an empirical causal study; it does not provide microdata on employment or wage effects but identifies plausible channels and institutional dynamics.
Author-stated methodology and limitations section describing type of study and data sources; explicitly reports lack of primary empirical data.
The federal U.S. approach to AI governance combines export controls for key AI hardware/software with a relatively permissive domestic regulatory stance that relies on executive guidance, voluntary standards, and sector-specific measures rather than comprehensive federal worker protections.
Comparative policy and legal review of federal-level instruments (export control lists, executive orders, agency guidance, proposed/final rules) described in the commentary; no primary empirical data or sample size.
The report has limited primary quantitative impact evaluation and relies on policy texts and secondary sources rather than large-scale empirical measurement of AI’s economic effects.
Explicit limitations section in the report describing methods and data constraints.
Methodological needs for AI-era labor models include dynamic skill taxonomies, high-frequency labor data (job postings, firm-level automation measures), and uncertainty quantification.
Paper's Research & policy recommendations and Methodological needs section (explicit recommendations).
The scenario analysis framework varies economic growth, automation rates, policy interventions, and investment to produce probabilistic demand–supply gaps.
Methods description of scenario analysis components and the variables varied in scenario experiments (explicit in Data & Methods).
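A stripped-down sketch of the probabilistic scenario logic (sample scenario drivers, propagate to a demand-supply gap, summarize the distribution) might look like this; the parameter ranges and the gap formula are invented for illustration and are not the platform's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 10_000

# Sample scenario drivers (placeholder distributions).
gdp_growth = rng.normal(0.02, 0.01, n_scenarios)        # annual GDP growth
automation_rate = rng.uniform(0.01, 0.06, n_scenarios)  # tasks automated per year
policy_strength = rng.uniform(0.0, 1.0, n_scenarios)    # reskilling intervention
investment = rng.normal(0.03, 0.015, n_scenarios)       # training investment share

# Toy mapping from drivers to a skill demand-supply gap (share of workforce).
demand = 0.10 + 2.0 * gdp_growth + 1.5 * automation_rate
supply = 0.08 + 0.05 * policy_strength + 1.0 * np.clip(investment, 0, None)
gap = demand - supply

print("median gap:", np.median(gap))
print("90% interval:", np.percentile(gap, [5, 95]))
print("P(shortage > 5% of workforce):", (gap > 0.05).mean())
```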
Intended users of the Hub include organizations, educational institutions, and policymakers to inform reskilling/education strategies, regional economic policy, and labor-market interventions.
Explicit statement of target users and use cases in the Key Points / Implications sections.
The system produces interpretable outputs for stakeholders: demand–supply trend analysis, geospatial hotspot maps, skill-gap radar charts, and policy simulation dashboards.
Paper's description of outputs and interactive visual analytics (listed output modalities).
The core modeling approach uses probabilistic growth modeling combined with intelligent skill synthesis to estimate future workforce requirements under alternative economic and policy scenarios.
Methods section describing the modeling components: probabilistic growth modeling and intelligent skill synthesis (architectural description).
The platform integrates multiple indicators such as regional economic growth projections, automation velocity, policy intervention strength, investment intensity, and market volatility (macro- and micro-level indicators).
List of input indicators given in the Data & Methods section of the paper (explicit enumeration of macro and micro variables).
Significant empirical gaps remain on long-term impacts (wage trajectories, employment composition, firm-level returns), verification/remediation cost quantification, and public-good risks of insecure code proliferation.
Cross-study synthesis explicitly identifying missing longitudinal and firm-level empirical research in the reviewed literature.
The paper's conclusions are limited by reliance on secondary sources, heterogeneous cross‑study comparisons, limited causal identification of long‑run macro effects, and measurement challenges for AI‑driven intangible capital.
Authors' stated limitations section summarizing the nature of evidence used (qualitative literature review, secondary macro indicators, sectoral examples); this is an explicit self‑reported methodological limitation rather than an external empirical finding.
Methodology used in the paper is a narrative review relying on secondary sources (literature, legal cases, policy reports, empirical perception studies) and conceptual synthesis; no new primary data were collected.
Paper's Data & Methods section explicitly states narrative review and secondary-data analysis.
Important empirical research gaps remain (consumer willingness-to-pay for authenticated vs. synthetic content, labor-displacement elasticities, market concentration dynamics, and cost–benefit evaluations of regulatory options).
Explicit statement of limitations and research needs in the paper, based on the authors' narrative review and absence of primary empirical studies within the paper.
The paper's methodology is a secondary-data, narrative (qualitative) literature review; it contains no original empirical data or primary quantitative analysis.
Explicit methodological statement in the paper describing secondary data analysis and narrative synthesis; absence of primary datasets or statistical analyses.
This paper is conceptual/theoretical and does not conduct primary empirical data collection.
Explicit methodological statement in the paper's Data & Methods section.
The paper is primarily conceptual/architectural and does not present large empirical studies quantifying the phenomenon across firms or repositories.
Explicit methodological statement in the paper describing its use of thought experiments, mechanism reasoning, and illustrative examples rather than empirical datasets.
The paper's conclusions are drawn from a mix of evidence types including literature review, surveys/interviews, case studies, usage-log or publication-metric analyses, and controlled experiments, although the abstract does not specify which of these were actually used or report sample sizes.
Explicitly noted in the Data & Methods summary as the likely underlying evidence types; the paper's abstract itself does not document original data or detailed methods.
There is a lack of large‑scale causal evidence on generative AI’s effects; the paper recommends RCTs, difference‑in‑differences, matched employer–employee panels, and longitudinal studies to fill empirical gaps.
Methodological critique and research agenda provided in the review; observation based on the authors' survey of the literature.
Policy interventions are needed for data protection, bias mitigation, model transparency, accountability, and public investments in workforce retraining to smooth transitions and reduce inequality.
Normative policy recommendations grounded in the review's synthesis of risks and distributional concerns; not an empirical claim but a recommendation.
New productivity metrics are needed to capture AI impacts, including time‑use changes, quality‑adjusted output, and accounting for intangible AI capital.
Methodological recommendation from the conceptual synthesis, motivated by limitations of existing measures discussed in the paper.
The paper is a policy-design and conceptual-architecture work and presents no original microdata or econometric estimates.
Methods section explicitly states absence of original empirical data; document contains policy proposals and modeling agenda only.
Token taxes are usage-based surcharges applied at the point of sale for model inference (i.e., charged per token or per inference request).
Paper's definitional specification and conceptual description; policy-design discussion (no empirical data).
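For concreteness, a per-token surcharge of the kind described could be computed as below; the price and tax rate are hypothetical, not rates proposed in the paper.

```python
def token_tax(tokens: int, price_per_1k: float, tax_rate: float) -> tuple[float, float]:
    """Return (base charge, surcharge) for a single inference request.

    price_per_1k and tax_rate are placeholder values, not proposed rates.
    """
    base = tokens / 1_000 * price_per_1k
    return base, base * tax_rate

base, surcharge = token_tax(tokens=250_000, price_per_1k=0.002, tax_rate=0.05)
print(f"base: ${base:.2f}, token tax: ${surcharge:.4f}")
```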
Static equilibrium and representative-agent models neglect dynamic reallocation, task re-bundling, and firm-level heterogeneity, limiting their realism for forecasting labour outcomes under AI adoption.
Theoretical critique offered in the paper and referenced critiques in the literature; evidence is conceptual and based on model assumptions identified across studies.
Common empirical strategies (cross-sectional exposure correlations and panel-difference analyses) often lack strong causal identification due to endogeneity of adoption and unobserved confounders.
Surveyed analytical strategies and explicit critique in the paper noting endogeneity and confounding; evidence is methodological critique grounded in the literature's reliance on observational exposure measures.