Evidence (3103 claims)

Claims by category:

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
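Rows of the matrix can be queried programmatically, for example to compare how one-sided the directional evidence is per outcome. A minimal sketch, using values copied from four rows above (the `—` null entry for Job Displacement is treated as 0; the dict covers only a sample of rows):

```python
# Selected rows from the evidence matrix: outcome -> (positive, negative, mixed, null).
matrix = {
    "Firm Productivity": (277, 34, 68, 10),
    "AI Safety & Ethics": (117, 177, 44, 24),
    "Job Displacement": (5, 31, 12, 0),
    "Task Completion Time": (78, 5, 4, 2),
}

def positive_share(counts):
    """Fraction of directional (positive + negative) claims that are positive,
    ignoring mixed and null findings."""
    pos, neg, mixed, null = counts
    return pos / (pos + neg)

for outcome, counts in matrix.items():
    print(f"{outcome}: {positive_share(counts):.2f}")
```

On these rows the contrast is stark: Task Completion Time evidence is overwhelmingly positive, while Job Displacement evidence skews heavily negative.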
Human-AI Collaboration
With calibrated oversight that aligns accountability to real-world risks, AI can secure the profession’s future.
Normative/prognostic claim in the Article (argument that appropriate governance will preserve or strengthen the legal profession).
With calibrated oversight that aligns accountability to real-world risks, AI can improve service quality in legal services.
Normative/prognostic claim in the Article (argument that governance plus AI yields quality improvements). No empirical effect sizes reported in the excerpt.
While the risks of AI are real, they must not eclipse the opportunity: with calibrated oversight that aligns accountability to real-world risks, AI can expand access to legal services.
Normative claim and projected benefit argued by the authors (theoretical/argumentative; no empirical evidence in excerpt).
The framework provides a roadmap for coordinated response across educational institutions, government agencies, and industry to ensure workforce resilience and domestic leadership in the emerging agentic finance era.
Authors' proposed integrated roadmap (prescriptive recommendation; no empirical testing or outcome measurement reported in the provided text).
We develop a comprehensive government policy framework including: 1) Federal AI literacy mandates for post-secondary business education; 2) Department of Labor workforce retraining programs with income support for displaced financial professionals; 3) SEC and Treasury regulatory innovations creating market incentives for workforce development; 4) State-level workforce partnerships implementing regional transition support; and 5) Enhanced social safety nets for workers navigating career transitions during the estimated 5-15 year transformation period.
Author-presented policy framework and recommendations (policy design proposals and an asserted 5–15 year transformation timeframe; no empirical evaluation reported).
We propose a multi-layered integration strategy for higher education encompassing: 1) Foundational AI literacy modules for all business students; 2) A specialized "Agentic Financial Planning" course with hands-on labs; 3) AI-augmented redesign of core courses (Investments, Portfolio Management, Ethics); 4) Interdisciplinary project-based learning with Computer Science; and 5) A governance and policy module addressing regulatory compliance (NIST AI RMF, SEC regulations).
Proposed curricular framework presented by the authors (recommendation/proposal, not empirically tested within the paper).
The ultimate competitive edge lies in an organization's ability to treat AI not as a standalone tool, but as a core component of sustainable, long-term corporate strategy.
Concluding normative claim in the paper; presented as an interpretation/synthesis rather than supported by cited empirical evidence in the abstract.
Successful global expansion is no longer predicated solely on physical presence but on the deployment of scalable, localized AI models that navigate diverse regulatory, linguistic, and cultural landscapes.
Argumentative claim in the paper describing a strategic determinant for global expansion; no empirical sample or quantified outcomes presented in the abstract.
AI hyper-personalizes customer engagement.
Declarative claim in the paper about AI's effect on customer engagement personalization; no experimental or observational data reported in the abstract.
AI acts as an internal engine for operational agility by compressing R&D cycles.
Claim made in the paper asserting R&D cycle compression due to AI; no empirical data, sample size or quantitative measures provided in the abstract.
The strategic focus has transitioned from mere process automation to autonomous orchestration, where multi-agent systems independently manage complex, cross-border operations and real-time decision-making.
Analytic statement from the paper describing an observed/argued shift in strategic focus; no empirical methodology or sample reported.
Organizations leverage agentic workflows and domain-specific intelligence to catalyse strategic innovation and facilitate global expansion in the digital era.
Conceptual claim in the paper describing how organizations use specific AI capabilities; no empirical design or sample described in the abstract.
The rapid evolution of Artificial Intelligence (AI) has shifted from a disruptive trend to the fundamental operating layer of the modern enterprise.
Statement/assertion in the paper (conceptual/positioning claim); no empirical method, sample size, or statistical analysis reported in the abstract.
Transparency’s effectiveness in promoting data-sharing is amplified by, and dependent upon, user trust; fostering trust in AI may be a more vital prerequisite for data-sharing than implementing transparent designs.
Synthesis of experimental findings (N=240): transparency increased willingness only among users with pre-existing trust; null effect of transparency alone on actual sharing; authors conclude that trust moderates transparency effects and recommend focusing on trust-building.
Immediate sharing decisions were largely driven by intuitive System 1 processing rather than deliberative evaluation (System 2).
Interpretation of the pattern in experimental data (N=240): high, similar sharing rates across conditions despite differing stated willingness-to-share and measured privacy concerns; authors attribute this to dual-process dynamics (System 1 driving immediate behavior).
The positive effect of transparency on willingness to share was contingent on pre-existing user trust in AI, particularly for white-box systems.
Moderation analyses reported from the experiment (N=240): interaction between transparency (white-box vs black-box) and measured pre-existing trust in AI showed increased willingness-to-share only among users with higher trust, with the effect most pronounced for white-box systems.
We conducted a pre-registered online experiment (N=240) where participants interacted with a fictional sleep-optimization app and were randomly assigned to scenarios where data was processed by either a human expert, a transparent white-box AI, or an opaque black-box AI.
Pre-registered online experimental design described in paper; random assignment to three processing-entity conditions (human, white-box AI, black-box AI); sample size reported as N=240; measured outcomes included actual data-sharing and willingness to share, plus trust and privacy concerns.
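The moderation finding above (transparency increasing willingness-to-share only among high-trust users) is a transparency × trust interaction. A minimal sketch of that kind of model on synthetic data; the variable names, coefficients, and simulated effect are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # matches the reported sample size

# Illustrative synthetic data: transparency helps only when trust is high,
# i.e. a pure interaction with no main effect of transparency.
transparency = rng.integers(0, 2, n)   # 0 = black-box, 1 = white-box
trust = rng.normal(0.0, 1.0, n)        # pre-existing trust in AI (standardized)
willingness = 0.5 * transparency * trust + rng.normal(0.0, 0.5, n)

# OLS with an interaction term: willingness ~ transparency * trust
X = np.column_stack([
    np.ones(n),                # intercept
    transparency,              # main effect of transparency
    trust,                     # main effect of trust
    transparency * trust,      # the moderation term of interest
])
beta, *_ = np.linalg.lstsq(X, willingness, rcond=None)
print("interaction coefficient:", round(beta[3], 2))
```

In this setup the interaction coefficient recovers the simulated moderation while the transparency main effect stays near zero, mirroring the reported pattern of a null effect of transparency alone.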
A Metacognitive Co-Regulation Agent (in CRDAL) assists the Design Agent in metacognition to mitigate design fixation, thereby improving system performance for engineering design tasks.
Mechanistic claim supported by the paper's experimental results on the battery pack design problem showing CRDAL outperforming SRL and RWL; detailed measures of fixation reduction not provided in the excerpt.
The CRDAL system navigated through the latent design space more effectively than both SRL and RWL.
Empirical analysis on the battery pack design task comparing latent-space trajectories/exploration between CRDAL, SRL, and RWL; details on how 'more effectively' was quantified and sample size are not provided in the excerpt.
The CRDAL system achieves better design performance without significantly increasing the computational cost compared to SRL and RWL.
Empirical claim based on experiments on the battery pack design problem comparing computational cost across CRDAL, SRL, and RWL; exact computational metrics and sample size not provided in the excerpt.
In the battery pack design problem examined here, the CRDAL system generates designs with better performance compared to a plain Ralph Wiggum Loop (RWL) and the metacognitively self-assessing Self-Regulation Loop (SRL).
Empirical comparison on a battery pack design task between CRDAL, SRL, and RWL reported in the paper; exact number of test instances or runs not stated in the excerpt.
We propose a novel Co-Regulation Design Agentic Loop (CRDAL), in which a Metacognitive Co-Regulation Agent assists the Design Agent in metacognition to mitigate design fixation.
Methodological contribution presented in the paper (proposed system architecture). No empirical sample size reported for the proposal itself.
We propose a novel Self-Regulation Loop (SRL), in which the Design Agent self-regulates and explicitly monitors its own metacognition.
Methodological contribution presented in the paper (proposed system architecture). No empirical sample size reported for the proposal itself.
AlphaFold represents an 'oracle' breakthrough in AI for scientific discovery.
Cited as an example of an algorithmic breakthrough that changed a specific scientific subtask (protein structure prediction). The paper frames AlphaFold as a milestone in the history reviewed; no new experimental data presented.
Recommended regulatory responses include algorithmic transparency mandates, mandatory mental health risk audits, participatory co-design, human review of deactivations, and minimum wage protections aligned with ILO principles.
Authors' policy recommendations derived from the review's synthesis and identified psychological risks.
Phase Three employs AI for comprehensive sensitivity analysis while humans provide strategic interpretation.
Descriptive claim about the third phase of the framework and its use in the paper's applied test; presented as the intended role split between AI (computational sensitivity tasks) and humans (interpretation).
Phase One leverages AI for rapid market research aggregation and preliminary pro forma generation.
Descriptive claim about the first phase of the proposed three-phase framework as presented in the paper; conceptual rather than a separate empirical finding.
The framework achieved a 71–90% time reduction while maintaining analytical quality comparable to traditional methods.
Empirical result reported from the controlled ChatGPT-4 test on the single 150-unit scenario comparing time to complete underwriting tasks versus traditional methods.
This research develops and empirically validates a three-phase framework for AI-augmented multifamily underwriting through controlled testing with ChatGPT-4 using a standardized 150-unit development scenario in Seattle's Greenwood neighborhood.
Controlled testing described in the paper: use of ChatGPT-4 on a single standardized 150-unit development scenario in Seattle's Greenwood neighborhood to evaluate the proposed three-phase framework.
Generative artificial intelligence demonstrates significant promise for efficiency gains across financial services.
Introductory assertion in paper; general statement about the potential of generative AI, not directly derived from the paper's controlled test.
Opportunities arising from cyborg workflows include hyper-personalized narratives, democratized production, and ethical augmentation of underrepresented voices.
Forward-looking/interpretive claim in the paper describing potential benefits and opportunities; conceptual rather than empirically demonstrated in the excerpt.
Scalability is addressed via edge computing to support cyborg workflows.
Design/architectural claim in the paper mentioning edge computing as a scalability mechanism; no deployment-scale measurements reported in the excerpt.
The proposed workflows include robust bias mitigation strategies.
Paper asserts bias mitigation approaches are included and demonstrated in case studies; no quantitative fairness metrics or evaluation details provided in the excerpt.
Cyborg workflows produce enhanced creative output via iterative human–AI refinement.
Qualitative claim supported by case studies and examples presented in the paper (no quantitative creativity metrics or sample sizes reported in the excerpt).
Empirical evaluations validate 25-60% improvements in key metrics.
Paper states empirical evaluation results with a 25–60% improvement range; specific metrics, methods, and sample sizes are not provided in the excerpt.
Case studies in content generation, news curation, and immersive production demonstrate efficiency gains of up to 3x in throughput.
Reported results from unspecified case studies described in the paper; numeric claim provided but case study sample sizes and methodological details are not reported in the excerpt.
The paper proposes a comprehensive framework encompassing modular architectures, hybrid protocols, and real-time collaboration interfaces informed by cognitive science, AI engineering, and media studies.
Architectural and methodological proposal described in the paper (the claim is descriptive of the proposed system; no quantitative evaluation of the framework components provided).
Cyborg workflows fuse human judgment with agentic AI autonomous systems capable of goal-directed planning and execution.
Conceptual description and framework proposed in the paper (no empirical sample or trial details reported).
The study developed and validated a new AI Job Crafting Scale.
Authors created and psychometrically validated an AI Job Crafting Scale within the multi-source, multi-wave study sample (287 employee–leader dyads); scale development and validation procedures reported.
Work autonomy strengthens the positive impact of AI approach job crafting on work meaningfulness (positive moderation).
Moderation analysis in the multi-wave, multi-source survey of 287 employee–leader dyads showing a significant interaction between AI approach job crafting and work autonomy predicting higher work meaningfulness.
The positive effect of AI approach job crafting on career-relevant outcomes (career satisfaction and performance) operates via increased work meaningfulness (mediation).
Mediation analysis conducted on multi-wave, multi-source survey data from 287 employee–leader dyads using measures of AI approach job crafting, work meaningfulness, and career outcomes.
AI approach job crafting positively predicts employee performance.
Multi-source, multi-wave survey of 287 employee–leader dyads in China; performance likely assessed via leader ratings in the dyadic design and linked to employee-reported AI approach job crafting.
AI approach job crafting positively predicts career satisfaction.
Multi-source, multi-wave survey of 287 employee–leader dyads in China using the newly developed AI Job Crafting Scale; statistical analysis linking employee-reported AI approach job crafting to career satisfaction (proximal professional indicator).
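The mediation claim for this study (crafting → meaningfulness → career outcomes) is typically tested as a product of path coefficients with a bootstrap confidence interval. A hedged sketch on synthetic data; the simulated effect sizes and the simplification to single-predictor slopes are assumptions, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 287  # matches the reported number of employee-leader dyads

# Illustrative synthetic chain: X -> M -> Y with no direct X -> Y effect.
crafting = rng.normal(0, 1, n)                           # AI approach job crafting
meaningfulness = 0.6 * crafting + rng.normal(0, 0.6, n)  # mediator
satisfaction = 0.5 * meaningfulness + rng.normal(0, 0.6, n)

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

a = slope(crafting, meaningfulness)      # path a: X -> M
b = slope(meaningfulness, satisfaction)  # path b: M -> Y
print("indirect effect a*b:", round(a * b, 2))

# Percentile bootstrap CI for the indirect effect.
boots = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boots.append(slope(crafting[idx], meaningfulness[idx]) *
                 slope(meaningfulness[idx], satisfaction[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")
```

A bootstrap CI that excludes zero is the usual criterion for a significant indirect effect; real mediation analyses would also partial the direct path (Y on both M and X).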
The authors call for shifting evaluation and assurance from tool qualification toward workflow qualification to achieve trustworthy Physical AI.
Normative recommendation based on the paper's theoretical analysis (policy/recommendation; no empirical sample reported).
The paper derives non-degradation conditions that characterize shadow-resistant workflows for AI-assisted safety analysis.
Analytic derivations and formal criteria presented in the paper (theoretical result; no empirical validation/sample size reported).
The paper formalizes four canonical human–AI collaboration structures and derives closed-form performance bounds for them.
Theoretical/mathematical derivations and models in the paper (no empirical verification/sample size reported).
A five-dimensional competence framework captures safety competence via domain knowledge, standards expertise, operational experience, contextual understanding, and judgment.
Theoretical contribution: paper defines and formalizes a five-dimension framework (no empirical validation/sample size reported).
To facilitate adoption of our evaluation framework, we detail our testing protocols and make relevant materials publicly available.
Statement in paper that testing protocols and materials are documented and released publicly (paper claims to provide materials).
We assess an AI model with 10,101 participants spanning interactions in three AI use domains (public policy, finance, and health) and three locales (US, UK, and India).
Reported sample size and study design details stated in abstract: N = 10,101; three domains and three locales specified.
This paper introduces a framework for evaluating harmful AI manipulation via context-specific human-AI interaction studies.
Paper describes a proposed evaluation framework (methodological contribution); claimed in abstract/introduction as new contribution. No numeric sample required for the claim itself.