Evidence (8066 claims)

- Adoption: 5586 claims
- Productivity: 4857 claims
- Governance: 4381 claims
- Human-AI Collaboration: 3417 claims
- Labor Markets: 2685 claims
- Innovation: 2581 claims
- Org Design: 2499 claims
- Skills & Training: 2031 claims
- Inequality: 1382 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 417 | 113 | 67 | 480 | 1091 |
| Governance & Regulation | 419 | 202 | 124 | 64 | 823 |
| Research Productivity | 261 | 100 | 34 | 303 | 703 |
| Organizational Efficiency | 406 | 96 | 71 | 40 | 616 |
| Technology Adoption Rate | 323 | 128 | 74 | 38 | 568 |
| Firm Productivity | 307 | 38 | 70 | 12 | 432 |
| Output Quality | 260 | 71 | 27 | 29 | 387 |
| AI Safety & Ethics | 118 | 179 | 45 | 24 | 368 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 75 | 37 | 19 | 312 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 74 | 34 | 78 | 9 | 197 |
| Skill Acquisition | 98 | 36 | 40 | 9 | 183 |
| Innovation Output | 121 | 12 | 24 | 13 | 171 |
| Firm Revenue | 98 | 35 | 24 | — | 157 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 87 | 16 | 34 | 7 | 144 |
| Inequality Measures | 25 | 76 | 32 | 5 | 138 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 89 | 7 | 4 | 3 | 103 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 33 | 11 | 7 | 98 |
| Wages & Compensation | 54 | 15 | 20 | 5 | 94 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 27 | 26 | 10 | 6 | 72 |
| Job Displacement | 6 | 39 | 13 | — | 58 |
| Hiring & Recruitment | 40 | 4 | 6 | 3 | 53 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 11 | 6 | 2 | 41 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 6 | 9 | — | 27 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
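The direction counts above reduce to simple summary statistics; a minimal Python sketch that computes the share of positive-direction claims per outcome, using three rows of the matrix whose four counts sum exactly to their stated totals:

```python
# Direction counts per outcome from the Evidence Matrix:
# (positive, negative, mixed, null).
rows = {
    "Task Completion Time": (89, 7, 4, 3),
    "Job Displacement": (6, 39, 13, 0),
    "Inequality Measures": (25, 76, 32, 5),
}

for outcome, (pos, neg, mixed, null) in rows.items():
    total = pos + neg + mixed + null
    share = pos / total  # fraction of claims with a positive finding
    print(f"{outcome}: {share:.1%} positive ({pos}/{total})")
```

Note that for several other rows the four direction counts do not sum to the printed total, so a share computed this way should use the sum of the classified counts rather than the Total column.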
The paper studies principal-agent alignment using revealed preference techniques.
Stated methodological approach in the abstract; implies analytical use of revealed-preference methods for identification.
The AI's alignment (similarity of human and AI preferences) can be generically identified in the field setting, where only AI choices are observed.
Analytical/theoretical identification result presented in the paper using revealed preference techniques (as stated in abstract); no empirical sample reported in the abstract.
The AI's alignment (similarity of human and AI preferences) can be generically identified in the laboratory setting, where both human and AI choices are observed.
Analytical/theoretical identification result presented in the paper using revealed preference techniques (as stated in abstract); no empirical sample reported in the abstract.
The paper introduces the Luce Alignment Model, where the AI's choices are a mixture of two Luce rules, one reflecting the human's preferences and the other the AI's.
Paper proposes and defines a new theoretical model (model specification described in abstract).
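The mixture structure described above can be illustrated concretely. A minimal sketch, assuming the standard Luce rule (choice probabilities proportional to a positive weight per alternative) and a hypothetical mixing weight `lam` on the human's rule; the weights and names below are illustrative, not from the paper:

```python
def luce(weights):
    """Luce rule: choice probability of each alternative is its positive
    weight divided by the sum of weights over the choice set."""
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

def luce_alignment_model(human_w, ai_w, lam):
    """AI choice probabilities as a lam/(1-lam) mixture of the human's
    Luce rule and the AI's own Luce rule (hypothetical parameterization)."""
    p_h, p_a = luce(human_w), luce(ai_w)
    return {x: lam * p_h[x] + (1 - lam) * p_a[x] for x in human_w}

# Illustrative weights over three alternatives.
human = {"a": 3.0, "b": 1.0, "c": 1.0}   # the human favours "a"
ai    = {"a": 1.0, "b": 1.0, "c": 3.0}   # the AI's own rule favours "c"
mixed = luce_alignment_model(human, ai, lam=0.7)
```

On this reading, a larger weight on the human's rule corresponds to closer alignment between human and AI preferences.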
Human decision makers increasingly delegate choices to AI agents.
Stated as motivation in the abstract; no empirical data or sample described in the provided text.
By formalizing the end-to-end transaction model together with its asset and incentive layers, EpochX reframes agentic AI as an organizational design problem: building infrastructures in which verifiable work leaves persistent, reusable artifacts and value flows support durable human-agent collaboration.
Theoretical framing and normative claim in the paper; no empirical evaluation demonstrating that this reframing yields measurable benefits.
Credits lock task bounties, allow budget delegation, settle rewards upon acceptance, and compensate creators when verified assets are reused.
Functional description of the credit mechanics and settlement rules within the proposed EpochX marketplace; presented as part of system design without empirical settlement or user-behavior data.
EpochX introduces a native credit mechanism to make participation economically viable under real compute costs.
Proposed economic/incentive mechanism described in the paper; no empirical cost analysis, pricing model validation, or participant economic outcomes reported.
These assets are stored with explicit dependency structure, enabling retrieval, composition, and cumulative improvement over time.
Design-level assertion about data model/asset graph in the EpochX proposal; no empirical results demonstrating retrieval/composition or measured cumulative improvement.
Each completed transaction can produce reusable ecosystem assets, including skills, workflows, execution traces, and distilled experience.
Architectural claim about artifacts produced per transaction in EpochX; described as a design goal rather than backed by empirical evidence or deployment data.
Claimed tasks can be decomposed into subtasks and executed through an explicit delivery workflow with verification and acceptance.
Design description of the workflow and verification/acceptance mechanisms in the proposed EpochX architecture; no empirical testing or metrics reported.
EpochX treats humans and agents as peer participants who can post tasks or claim them.
Architectural/design specification in the paper describing participant roles and interactions; no empirical validation provided.
We introduce EpochX, a credits-native marketplace infrastructure for human-agent production networks.
System/design description in the paper (architectural proposal); no deployment, user study, or evaluation results reported.
Google has been pioneering machine learning usage across dozens of products.
Contextual statement in the abstract about the organization's activity; asserted without empirical detail in abstract.
The techniques and approaches described can be generalized for other framework migrations and general code transformation tasks.
Authors' stated expectation/generalization claim in the abstract; no empirical evidence or cross-framework experiments reported in the abstract.
The system creates a virtuous circle in which AI effectively supports its own development workflow.
Conceptual claim supported by the system's design and reported improvements that enable iterative AI-assisted development; described qualitatively in the paper.
Our approach dramatically reduces the time (6.4x-8x speedup) for deep learning model migrations.
Quantitative speedup figure reported in the paper's abstract (6.4x-8x); likely based on measured migration times on demonstrated cases, though the abstract does not state sample size or exact experimental setup.
The system accelerates code migrations in a large hyperscaler environment on commercial real-world use-cases.
Reported demonstration and evaluation in a hyperscaler (commercial) environment using real-world cases as described in the paper; no detailed sample size given in abstract.
We define quality metrics and AI-based judges that accelerate development when the code to evaluate has no tests and has to adhere to strict style and dependency requirements.
Design and implementation of quality metrics and AI-based judges described in the paper; claimed acceleration of development workflow (no numeric quantification in abstract).
We built an AI-based multi-agent system to support automatic migration of TensorFlow-based deep learning models into JAX-based ones.
System implementation and description in the paper; demonstration on real-world code migration tasks in a hyperscaler environment (qualitative description in abstract).
The productivity channel raises corporate cash flows and is equity-bullish.
Model mechanism described in the paper: productivity effects of AI increase corporate cash flows which, within the model, produce an equity-bullish effect on the ERP/valuations.
AI methods improve sustainability disclosure.
Stated in the review as an outcome of employing AI for ESG analytics and sustainability reporting; specific supporting studies or sizes are not provided in the excerpt.
AI methods improve risk management in sustainable finance.
Claim synthesized from literature reviewed on AI applications in climate risk analytics and risk modeling; no numerical sample details provided in the excerpt.
AI methods improve portfolio management in sustainable finance contexts.
Asserted by the review as part of the assessment of AI effectiveness for managing portfolios and risk in sustainable investing; no quantitative sample size or effect estimate reported in the excerpt.
AI methods (including machine learning, natural language processing, predictive analytics) improve ESG measurement.
Paper claims this as a conclusion from its review of studies applying AI techniques to ESG scoring and analytics; no primary sample sizes or effect estimates presented in the excerpt.
AI facilitates the real-time tracking of environmental and social risks.
Claim reported in the paper as a synthesized finding from reviewed literature on AI applications in sustainability and climate/ESG analytics; no numeric sample size provided.
AI drastically enhances companies' ESG performance analysis, sustainable investment planning, and transparency.
Statement in the paper summarizing results from a literature review of studies on AI/ML, NLP, predictive analytics, and sustainability reporting (systematic review synthesis). No specific primary study sample size reported in the excerpt.
Efficient conversion of R&D into technological barriers is key to avoiding the 'AI trap'; new energy vehicle firms should prioritize R&D efficiency, translate innovation into stable returns, and maintain sound financial conditions.
Paper's conclusion/recommendation derived from empirical findings (2013–2023 sample) linking R&D conversion/patent transformation and intelligent equipment output to reduced financial risk from AI dependence.
Strong knowledge or intelligent-equipment output, together with effective patent transformation, mitigates the financial risks associated with AI dependence.
Moderation and heterogeneity tests reported in the paper using the same sample (listed NEV and automobile manufacturers, 2013–2023) indicate these factors reduce the adverse effect of AI dependence on financial safety.
Robustness checks using clustered standard errors confirm the stability of all key coefficients.
Abstract states robustness checks were performed using clustered standard errors and that these confirm stability of key coefficients (no additional statistics reported in abstract).
Time effects are pronounced, with positive and significant shifts in 2020 (+7.02) and 2022 (+8.10) relative to the baseline year, reflecting acceleration of digital public administration in the post-pandemic period.
Reported time-effect coefficients in the panel specification (years relative to baseline). Abstract gives +7.02 for 2020 and +8.10 for 2022. No p-values shown in abstract but described as positive and significant.
Random effects (RE) models show a positive cross-country correlation between AI readiness and e-government development, with a coefficient of 0.35 (p < 0.001).
RE model reported in abstract for AI readiness (presumably GAIRI) vs EGDI. Reported RE coefficient = 0.35 (p < 0.001). Sample for GAIRI–EGDI reported as 170 countries (2020–2024).
Random effects (RE) models show a positive cross-country correlation between the AI Vibrancy Score and e-government development, with a coefficient of 2.55 (p < 0.001).
RE model reported in abstract for the AIVS–EGDI relationship. Sample for AIVS–EGDI reported as 36 countries (2018–2022). RE coefficient reported = 2.55 (p < 0.001).
Within-country improvements in AI readiness (Government AI Readiness Index) are positively and robustly associated with higher levels of e-government development, with the FE estimate equal to 0.17 (p < 0.001).
Panel data analysis using fixed effects (FE) on the GAIRI–EGDI sample (Government AI Readiness Index vs E-Government Development Index). Reported FE coefficient = 0.17 with p < 0.001. Sample referred to in abstract for GAIRI–EGDI: 170 countries (2020–2024).
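The fixed-effects estimate reported above comes from a within-country transformation. A minimal sketch of that estimator on made-up two-country panel data (variable names and numbers are illustrative, not the paper's GAIRI–EGDI sample):

```python
import numpy as np

def within_fe(y, x, groups):
    """Fixed-effects (within) slope: demean y and x within each group,
    then run OLS through the origin on the demeaned values."""
    y, x, groups = map(np.asarray, (y, x, groups))
    yd = y.astype(float).copy()
    xd = x.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= yd[m].mean()   # remove the group (country) mean
        xd[m] -= xd[m].mean()
    return (xd @ yd) / (xd @ xd)

# Hypothetical panel: an e-government outcome vs. an AI-readiness index.
country   = ["A", "A", "A", "B", "B", "B"]
readiness = [10, 12, 14, 20, 22, 24]
egov      = [1.0, 1.4, 1.8, 5.0, 5.4, 5.8]
beta = within_fe(egov, readiness, country)  # within-country slope
```

Because the country means are removed, the estimate reflects only within-country changes over time, which is why the FE coefficient (0.17) can differ from the RE coefficient (0.35) that also absorbs cross-country variation.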
Cheaper search improves learning and consumer surplus.
Analytical results from the paper's theoretical model of agentic two-sided markets; steady-state characterization of dynamics under varying search cost parameters. No empirical sample or experimental data reported.
The dataset, contexts, annotations, and evaluation harness are released publicly.
Paper states that dataset, contexts, annotations, and evaluation harness are released publicly (release / open-source claim).
A structured 2,000-token diff-with-summary prompt outperforms a 2,500-token full-context prompt (enriched with execution context, behaviour mapping, and test signatures) across all 8 models.
Direct prompt/context-size comparison across the 8 models on SWE-PRBench; reported that the 2,000-token diff-with-summary prompt yields better performance than the 2,500-token full-context prompt with extra enrichments.
The LLM-as-judge framework used for evaluation is validated at kappa = 0.75.
Inter-judge validation reported in paper (agreement metric kappa reported as 0.75). Specific validation sample size not stated in the excerpt.
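The kappa = 0.75 figure above is an agreement statistic; a common choice for two raters is Cohen's kappa, sketched below on made-up labels (the paper's validation data and label scheme are not given in the excerpt):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal label frequencies."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical LLM-judge vs. human labels on ten review comments.
judge = ["valid", "valid", "invalid", "valid", "invalid",
         "valid", "valid", "invalid", "valid", "valid"]
human = ["valid", "valid", "invalid", "valid", "valid",
         "valid", "valid", "invalid", "valid", "valid"]
kappa = cohens_kappa(judge, human)
```

Values around 0.75 are conventionally read as substantial agreement, which is presumably why the paper treats the judge as validated.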
Pull requests are drawn from active open-source repositories, filtered from 700 candidates using a Repository Quality Score.
Dataset curation procedure reported: initial pool of 700 candidate repositories/PRs filtered by a Repository Quality Score to produce the final benchmark.
We introduce SWE-PRBench, a benchmark of 350 pull requests with human-annotated ground truth for evaluating AI code review quality.
Dataset construction described in paper: benchmark contains 350 pull requests with human annotations. Pull requests drawn from active open-source repositories and filtered from 700 candidates using a Repository Quality Score.
The paper concludes by articulating expected outcomes for management practice and proposes a research agenda calling for future mixed-methods validation of the framework.
Stated conclusion and explicit call for mixed-methods validation; no validation results provided in this paper.
The review derives constructs, hypothesized links among them, and governance implications for managing and institutionalizing workplace AI.
Paper reports that reviewed sources were used to derive constructs and governance implications; this is a conceptual derivation rather than empirical testing.
The framework and synthesis can be used to diagnose patterns of disengagement and pilot-to-production failure in corporate AI initiatives.
Proposed analytical structure derived from literature synthesis and conceptual mapping; intended as a diagnostic tool but not empirically validated within this paper.
The paper integrates adoption frameworks (TAM and TOE) with evidence on human-AI interaction to produce a scaling-oriented conceptual framework for diagnosing disengagement and pilot-to-production failures.
Comparative conceptual analysis and framework building based on reviewed literature; no new empirical validation reported.
Integrating technological, human, and organizational capabilities is important to maximize the benefits of AI in smart manufacturing.
Conclusion based on thematic patterns in interviews, observations, and document analysis from purposively sampled supply chain and production professionals; identified as an implementation implication.
Firms adopting AI-driven forecasting and inventory strategies can achieve higher operational agility, better strategic resource alignment, and maintain a competitive advantage in dynamic manufacturing contexts.
Synthesis and implications drawn from thematic analysis of interviews, site visits, and documents from purposively sampled industry practitioners; presented as study conclusions rather than quantitatively tested outcomes.
AI supports sustainability initiatives within manufacturing operations.
Thematic analysis of practitioner interviews and organizational documentation where respondents linked AI-based forecasting/inventory optimization to sustainability outcomes (e.g., waste reduction).
AI improves supply chain coordination among partners and internal functions.
Interview and document-based thematic findings from purposively sampled supply chain managers and industry experts reporting enhanced coordination following AI adoption.
AI contributes to operational resilience in manufacturing supply chains.
Qualitative evidence from interviews and organizational documents indicating that AI-enabled forecasting and inventory controls improve firms' ability to adapt to disruptions; thematic analysis produced resilience as a reported benefit.
Organizational readiness, skilled personnel, data quality, and robust technological infrastructure are critical factors influencing AI effectiveness.
Recurring themes identified via thematic analysis of semi-structured interviews with supply chain and production professionals, corroborated by observational site visits and organizational documents from purposive sample.