Evidence (2340 claims)
- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Org Design
The systematic review followed the PRISMA protocol and analyzed a corpus of 103 items (peer‑reviewed articles and institutional reports) published 2010–2024.
Explicit methodological statement in the paper describing PRISMA use and corpus size/timeframe.
Further longitudinal cost-benefit studies, scalability benchmarks, and cross-domain trials are needed to determine when on-prem RAG is the dominant economic choice.
Paper's research & evaluation recommendations calling for additional longitudinal and cross-domain empirical work; presented as a recommendation rather than an empirical finding.
Human-in-the-loop judgments were central to the paper's relevance/usefulness claims; the evaluation did not rely solely on synthetic benchmarks.
Methods description explicitly states human evaluation by domain experts was used alongside quantitative benchmarks.
The study is limited by being a single‑country case: contextual factors (regulatory regime, infrastructure capacity, procurement practices) may limit generalizability, and the study emphasizes institutional and ethical analysis rather than quantitative measurement of economic impacts.
Explicit limitations reported in the paper summarizing scope and emphasis.
Methods used include qualitative interviews with researchers and administrators, observation/documentation of tool use, mapping of data flows and third‑party dependencies, and normative/legal analysis contrasting local practices with GDPR principles.
Methods section of the paper as reported in the provided summary.
The study's empirical basis is a qualitative case study centered on environmental science research in Chile that adopts the GDPR as an organizing normative framework.
Paper description of study scope and normative framing (methods and focus described in Data & Methods).
There is a need for validated administrative and firm-level data on AI adoption, workplace monitoring, and worker outcomes, and for evaluation of policy interventions (mandated impact assessments, transparency requirements, worker representation rules) using randomized or quasi-experimental designs where feasible.
Research and measurement priorities set out in the commentary based on identified gaps; prescriptive recommendation rather than evidence-based finding.
The paper is a policy and legal commentary/synthesis and not an empirical causal study; it does not provide microdata on employment or wage effects but identifies plausible channels and institutional dynamics.
Author-stated methodology and limitations section describing type of study and data sources; explicitly reports lack of primary empirical data.
The federal U.S. approach to AI governance combines export controls for key AI hardware/software with a relatively permissive domestic regulatory stance that relies on executive guidance, voluntary standards, and sector-specific measures rather than comprehensive federal worker protections.
Comparative policy and legal review of federal-level instruments (export control lists, executive orders, agency guidance, proposed/final rules) described in the commentary; no primary empirical data or sample size.
The paper's conclusions are limited by reliance on secondary sources, heterogeneous cross‑study comparisons, limited causal identification of long‑run macro effects, and measurement challenges for AI‑driven intangible capital.
Authors' stated limitations section summarizing the nature of evidence used (qualitative literature review, secondary macro indicators, sectoral examples); this is an explicit self‑reported methodological limitation rather than an external empirical finding.
Methodology used in the paper is a narrative review relying on secondary sources (literature, legal cases, policy reports, empirical perception studies) and conceptual synthesis; no new primary data were collected.
Paper's Data & Methods section explicitly states narrative review and secondary-data analysis.
Important empirical research gaps remain (consumer willingness-to-pay for authenticated vs. synthetic content, labor-displacement elasticities, market concentration dynamics, and cost–benefit evaluations of regulatory options).
Explicit statement of limitations and research needs in the paper, based on the authors' narrative review and absence of primary empirical studies within the paper.
The paper's methodology is a secondary-data, narrative (qualitative) literature review; it contains no original empirical data or primary quantitative analysis.
Explicit methodological statement in the paper describing secondary data analysis and narrative synthesis; absence of primary datasets or statistical analyses.
Further causal, experimental research (randomized deployments) is needed to precisely quantify net productivity and labor reallocation effects of AI agents.
Paper's stated research priorities and explicit acknowledgement of limitations from observational design; no randomized trials reported in the study.
There are measurement challenges for quality-adjusted productivity—errors and downstream effects may reduce net benefits of agent automation and are under-measured in the study.
Authors' noted limitations and concerns about quality-adjusted productivity measurement (error rates, downstream externalities) based on observational deployment experience; no formal measurement of downstream costs reported.
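Where the authors flag under-measured downstream costs, the adjustment can be made explicit. A minimal sketch of a quality-adjusted gain calculation; all figures below are hypothetical placeholders, not numbers from the deployment:

```python
# Illustrative quality-adjusted productivity gain for an AI agent deployment.
# All numbers are hypothetical placeholders, not figures from the paper.
gross_hours_saved = 120.0        # hours saved per month by agent automation
error_rate = 0.04                # fraction of agent outputs containing defects
rework_hours_per_error = 3.0     # human hours to detect and fix each defect
outputs_per_month = 500

rework_hours = error_rate * outputs_per_month * rework_hours_per_error
net_hours_saved = gross_hours_saved - rework_hours
print(f"net hours saved: {net_hours_saved:.1f}")  # 120 - 60 = 60.0
```

Even a modest error rate can absorb half the gross gain, which is the authors' point about net benefits being under-measured.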
Small-scale, domain-specific deployments of Alfred AI limit external validity to other industries or larger firms.
Deployment context described as small-scale e-commerce; authors note generalizability limitations stemming from domain- and scale-specific nature of the experiments.
Because the study is observational and non-randomized, causal claims about the effect of AI agents on productivity and labor are limited.
Study design explicitly described as applied experimentation and observational deployments (non-randomized); potential confounding and selection biases acknowledged by the authors.
Researchers and firms should measure generation throughput, verification throughput, defect accumulation rates, mean time to detection/fix, costs per incident, and the marginal value of additional verification capacity to evaluate the framework's claims.
Prescriptive measurement priorities listed in the paper as recommendations for empirical validation.
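One way to operationalize that measurement list is a small bookkeeping structure; the sketch below uses field names of our choosing and illustrative numbers, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class VerificationEconomics:
    """Hypothetical bookkeeping for the metrics the paper recommends tracking."""
    generated_per_day: float       # generation throughput
    verified_per_day: float        # verification throughput
    defects_per_100_items: float   # defect accumulation rate
    mean_hours_to_fix: float       # mean time to detection/fix
    cost_per_incident: float

    def verification_backlog_per_day(self) -> float:
        # Items generated faster than they can be verified accumulate unreviewed.
        return max(0.0, self.generated_per_day - self.verified_per_day)

    def expected_daily_incident_cost(self) -> float:
        return (self.generated_per_day * self.defects_per_100_items / 100
                * self.cost_per_incident)

m = VerificationEconomics(200, 150, 2.5, 4.0, 80.0)
print(m.verification_backlog_per_day())    # 50.0 unverified items/day
print(m.expected_daily_incident_cost())    # 400.0
```

The marginal value of additional verification capacity then falls out as the incident cost avoided per extra item verified per day.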
The abstract reports no empirical tests, simulations, or field experiments; empirical validation of the framework is left for future work.
Direct observation of the paper's abstract and methods description indicating lack of empirical validation.
The paper's contribution is primarily conceptual/architectural rather than empirical.
Explicit statement in the paper and absence of reported empirical tests, simulations, or field experiments in the abstract and methods section.
There are limited standardized measures of 'AI capital,' scarce data on firm-level AI investment and implementation quality, and few long-run causal estimates of AI’s effects on managerial productivity and labor outcomes.
Gap analysis based on literature review and methodological discussion within the book; observation about the state of available empirical evidence.
The paper is primarily conceptual/architectural and does not present large empirical studies quantifying the phenomenon across firms or repositories.
Explicit methodological statement in the paper describing its use of thought experiments, mechanism reasoning, and illustrative examples rather than empirical datasets.
The paper's conclusions draw on a mix of evidence types (literature review, surveys/interviews, case studies, usage-log or publication-metric analyses, and controlled experiments), although the abstract does not specify which of these were actually used or their sample sizes.
Explicitly noted in the Data & Methods summary as the likely underlying evidence types; the paper's abstract itself does not document original data or detailed methods.
There is a lack of large‑scale causal evidence on generative AI’s effects; the paper recommends RCTs, difference‑in‑differences, matched employer–employee panels, and longitudinal studies to fill empirical gaps.
Methodological critique and research agenda provided in the review; observation based on the authors' survey of the literature.
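For concreteness, the difference‑in‑differences design the review calls for could be estimated as below; this is a generic sketch on a tiny hypothetical panel, not code or data from the review:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level panel: `adopted_ai` marks treated firms,
# `post` marks periods after generative-AI rollout.
df = pd.DataFrame({
    "outcome":    [10, 11, 10, 14, 9, 9.5, 10, 10.5],
    "adopted_ai": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":       [0, 0, 1, 1, 0, 0, 1, 1],
})

# The coefficient on adopted_ai:post is the difference-in-differences estimate.
model = smf.ols("outcome ~ adopted_ai * post", data=df).fit()
print(model.params["adopted_ai:post"])
```

The same skeleton extends to matched employer–employee panels by adding worker and firm fixed effects.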
Policy interventions are needed for data protection, bias mitigation, model transparency, accountability, and public investments in workforce retraining to smooth transitions and reduce inequality.
Normative policy recommendations grounded in the review's synthesis of risks and distributional concerns; not an empirical claim but a recommendation.
New productivity metrics are needed to capture AI impacts, including time‑use changes, quality‑adjusted output, and accounting for intangible AI capital.
Methodological recommendation from the conceptual synthesis, motivated by limitations of existing measures discussed in the paper.
Further quantitative research is needed to measure task‑level productivity effects, skill‑depreciation trajectories, and market impacts of differential GenAI adoption; structural models could incorporate TGAIF to predict labor demand and wage effects.
Authors' stated research agenda and limitations acknowledged in the paper; this is a call for future empirical work rather than an empirical claim.
ChatGPT was used as the generative engine for the MLLM in the system implementation described in the paper.
Methods section: integration of AR overlays with an MLLM, with ChatGPT used as the generative engine (explicit in the summary).
The paper proposes measurable metrics such as projection congruence indices, alignment persistence measures, monitoring/oversight burden, and outcome variability/tail risks attributable to agentic autonomy.
Explicit metric proposals in the methods and metrics section of the paper; presented as part of a research agenda rather than empirically implemented.
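One of these proposals, outcome variability and tail risk, has a standard operationalization in expected shortfall (CVaR). The sketch below substitutes that standard measure, with simulated outcomes standing in for real deployment data, since the paper presents the metrics as a research agenda rather than an implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical distribution of per-episode outcomes under agentic autonomy.
outcomes = rng.normal(loc=1.0, scale=2.0, size=10_000)

def cvar(losses: np.ndarray, alpha: float = 0.95) -> float:
    """Expected shortfall: mean loss in the worst (1 - alpha) tail."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

losses = -outcomes  # treat negative outcomes as losses
print(f"95% CVaR of losses: {cvar(losses):.2f}")
```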
The paper proposes specific empirical and analytic follow-ups — multi-agent simulations, lab experiments with humans and adaptive agents, field case studies, econometric analyses, and formal economic models — to test the conceptual claims.
Explicit methods and research agenda listed in the paper; these are recommended future methods, not evidence.
Agentic AI is characterized by three properties that drive structural uncertainty: open-ended action trajectories, generative representations/outputs, and evolving objectives.
Definitions and taxonomy developed in the paper based on conceptual synthesis; presented as framing rather than empirically measured properties.
The framework provides sector-specific implementation guidance tailored to healthcare and public administration, accounting for existing governance and regulatory structures.
Case/sector guidance sections offering practical recommendations and considerations for deployment in those sectors; design-oriented, not empirically piloted in the paper.
DAR identifies four trigger classes that govern transitions between authority states: data superiority, contextual judgment requirements, risk thresholds, and ethics/legal overrides.
Conceptual derivation and classification in the framework; mapping of trigger types to transition rules. Theoretical, no empirical data.
The Dynamic Authority Reversal (DAR) framework formalizes four discrete intra-episode authority states: Human-Leader/AI-Follower (HL), AI-Leader/Human-Follower (AL), Co-Leadership (CO), and Mutual Override (MO).
Formal conceptual specification and formal modeling within the paper; definitions of the four states and their roles. No empirical sample; theoretical/design artifact.
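Read together with the trigger classes above, the four states suggest a small state machine. A minimal sketch, assuming illustrative transition rules rather than the paper's formal specification:

```python
from enum import Enum

class Authority(Enum):
    HL = "Human-Leader/AI-Follower"
    AL = "AI-Leader/Human-Follower"
    CO = "Co-Leadership"
    MO = "Mutual Override"

class Trigger(Enum):
    DATA_SUPERIORITY = "data superiority"
    CONTEXTUAL_JUDGMENT = "contextual judgment required"
    RISK_THRESHOLD = "risk threshold crossed"
    ETHICS_LEGAL_OVERRIDE = "ethics/legal override"

# Illustrative transition rules; the paper's formal model may differ.
TRANSITIONS: dict[tuple[Authority, Trigger], Authority] = {
    (Authority.HL, Trigger.DATA_SUPERIORITY): Authority.AL,
    (Authority.AL, Trigger.CONTEXTUAL_JUDGMENT): Authority.HL,
    (Authority.AL, Trigger.RISK_THRESHOLD): Authority.CO,
    (Authority.CO, Trigger.ETHICS_LEGAL_OVERRIDE): Authority.MO,
}

def step(state: Authority, trigger: Trigger) -> Authority:
    # Unmapped (state, trigger) pairs leave authority unchanged.
    return TRANSITIONS.get((state, trigger), state)

print(step(Authority.HL, Trigger.DATA_SUPERIORITY))  # Authority.AL
```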
Further quantitative and comparative research is needed to measure net productivity effects, skill trajectories, and generalizability across firm types and industries.
Authors' methodological assessment and limitations section noting single-firm qualitative design (Netlight) and rapidly evolving toolchains; recommendation for future empirical work.
Another important gap is quantifying complementarities between AI and different skill types (evaluative vs. generative tasks).
Review observation that existing empirical work has not systematically quantified how AI productivity gains vary with worker skill composition and complementary roles.
Key research gaps include a lack of long-run causal evidence on the effects of LLMs on firm-level innovation rates, business formation, and industry structure.
Explicit identification of gaps in the literature within the nano-review; the review states that most studies are short-term, task-level, or descriptive.
Study limitations include reliance on perceptual measures (rather than solely objective performance), heterogeneity across institutional samples, and likely correlational rather than strictly causal identification.
Authors' own noted limitations in the paper's methods section: mixed-methods design using perceptions from questionnaires and interviews, sample heterogeneity across multinational institutions, and quantitative analyses that are associative rather than strictly causal.
There is a lack of causal evidence on the long-run impacts of AI-driven HRM on employment, wages, and firm survival—this is a key research gap identified by the review.
Explicitly stated research gap in the review based on assessment of methodologies and findings across the 47 included studies.
A systematic review following PRISMA identified 47 peer-reviewed studies (2012–2024) on data-driven HRM and workforce resilience from Scopus, Web of Science, and Google Scholar.
Explicit review protocol and search/screening results reported by the paper (PRISMA-based), final sample size = 47 studies.
There is a need for causal, longitudinal studies quantifying economic returns of ERP-AI integration and for measurement frameworks for quality-adjusted decision improvements.
Stated limitation and research opportunity in the review; reviewers found scarcity of longitudinal causal studies in the 2020–2025 literature.
Empirical validation on experimental or field data is needed to fully establish k-QREM's practical applicability; current results are based on numerical examples and simulations.
Paper's methodology and validation section: validation confined to two numerical example datasets and simulation studies; authors acknowledge lack of real experimental/field validation and propose it as future work.
Extensions such as Bayesian hierarchical estimation and integration with multi-agent reinforcement learning are promising future directions but not implemented in the paper.
Authors' discussion of future work and limitations; no empirical or methodological implementation presented for these extensions in the current paper.
k-QREM explicitly models heterogeneity both across cognitive levels (different proportions of players at each level) and within levels (stochastic variability among players assigned to the same level).
Model specification: the paper defines level-specific quantal response functions and allows distributions over player types within each level (theoretical/modeling choices demonstrated in equations and architecture).
k-QREM is a hierarchical quantal-response model that nests the Cognitive Hierarchy Model (CHM) and Quantal Response Equilibrium (QRE) as special or limiting cases.
Analytical model construction in the paper: k-level hierarchical formulation showing CHM (discrete levels, deterministic best-response limit) and QRE (single-level stochastic best-response) arise as special/limiting parameterizations of k-QREM (model derivation/proofs provided).
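To make the nesting concrete, here is a sketch of iterated logit quantal responses in a symmetric 2x2 game. It uses the simpler level-k convention (each level responds only to the level below) with illustrative payoffs and precision; k-QREM itself mixes over lower levels and adds within-level heterogeneity:

```python
import numpy as np

def logit_qr(utilities: np.ndarray, lam: float) -> np.ndarray:
    """Logit quantal response: choice probabilities proportional to exp(lam * u)."""
    z = lam * (utilities - utilities.max())
    p = np.exp(z)
    return p / p.sum()

# Payoffs to the row player: U[i, j] = payoff of row action i vs opponent action j.
U = np.array([[3.0, 0.0],
              [1.0, 2.0]])

# Level-0 randomizes uniformly; level-k quantal-responds to level-(k-1).
beliefs = np.array([0.5, 0.5])
lam = 2.0  # precision: lam -> inf recovers deterministic best response (the CHM limit);
           # a single stochastic level corresponds to QRE's logit response.
for k in range(1, 4):
    expected_u = U @ beliefs          # expected utility of each row action
    beliefs = logit_qr(expected_u, lam)  # symmetric game: own play = opponent belief
    print(f"level {k} choice probabilities: {beliefs.round(3)}")
```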
There is a need for empirical research to quantify net economic impact (productivity gains vs governance costs), effects on employment composition and wages, and market outcomes from alternative governance architectures.
Explicit research gaps listed in the paper; recommendation for future empirical strategies (difference-in-differences, event studies, randomized pilots, instrumental variables) and suggested data sources.
The article’s evidence is predominantly practitioner-driven and illustrative, relying on qualitative case evidence rather than systematic quantitative causal estimates.
Explicit statement in the paper’s Data & Methods section describing nature of evidence and limitations; methods listed include synthesis, comparative analysis, illustrative architectures, and anecdotal cases.
Key technical components of the pattern include low-code platforms for rapid governed app development, RPA for deterministic process automation and legacy integration, and generative AI for document understanding, conversational interfaces, and decision support — with guardrails.
Paper’s component list and rationale based on practitioner experience and multi-sector examples; presented as recommended components in the reference architecture; no experimental validation of component selection given.
The proposed layered deployment pattern integrates organizational governance (roles, policies, decision rights), technical architecture (platforms, APIs, data flows), and AI risk management (controls, monitoring, human-in-the-loop).
Design and architectural proposal within the paper; described via illustrative deployment patterns and reference architectures. This is a descriptive claim about the proposed pattern rather than an empirical effect.
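Purely as a structural illustration of the three layers, the pattern could be written down declaratively; every component and policy name below is a hypothetical example, not drawn from the paper:

```python
# Illustrative structure for the three layers of the proposed pattern.
# Component and policy names are hypothetical examples, not the paper's.
DEPLOYMENT_PATTERN = {
    "organizational_governance": {
        "roles": ["app owner", "model risk officer", "process steward"],
        "decision_rights": "humans approve any action above a defined risk tier",
    },
    "technical_architecture": {
        "low_code_platform": "governed app development",
        "rpa": "deterministic process automation and legacy integration",
        "generative_ai": ["document understanding", "conversational interface",
                          "decision support"],
    },
    "ai_risk_management": {
        "guardrails": ["input/output filtering", "grounding checks"],
        "monitoring": "log prompts, outputs, and overrides for audit",
        "human_in_the_loop": True,
    },
}
```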
There is a need for empirical research: studies quantifying prompt-fraud incidents and losses, field experiments comparing control portfolios, and economic models of optimal investment in AI controls.
Explicit research agenda and limitations acknowledged by the authors noting lack of empirical prevalence data and need for operational validation.