Evidence (2340 claims)

Claim counts by topic:

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Org Design
Microsoft Azure has become one of the first enterprise-scale platforms facilitating GenAI-driven change.
Statement in the paper's abstract asserting Azure's market position as an early enterprise-scale platform for GenAI.
The technology particularly benefits less experienced practitioners by providing comprehensive starting points for legal research, while experienced attorneys can use it for quality control and initial drafts.
Authors' interpretation of AI outputs from the experiment and reasoning about how those outputs map onto different practitioner needs (qualitative judgment).
The analysis reveals AI’s potential to transform law firm economics by dramatically reducing research time while maintaining analytical quality, though careful attorney oversight remains essential.
Inference from the experimental finding that four AI systems produced substantive analysis comparable to junior-associate work on one transcript and the stated observation about traditional research time (8–40 hours); authors' qualitative judgment about economic implications and need for oversight.
Statutory and regulatory citations proved generally accurate and useful.
Authors' examination of statutory and regulatory references produced by the four AI engines in the experiment, judged to be generally correct and helpful.
All four engines successfully spotted legal issues, assessed claim strengths and weaknesses, and suggested follow-up investigation—tasks that traditionally required eight to forty hours of junior attorney research time.
Observed outputs from the four AI engines on the single transcript showing issue-spotting, strengths/weaknesses assessment, and suggested follow-ups; comparison to typical junior attorney research time (stated as 8–40 hours).
Contemporary generative AI performs sophisticated legal analysis comparable to experienced associates, correctly identifying major employment law claims including ADA violations, Title VII discrimination, OSHA retaliation, FMLA interference, and workers’ compensation retaliation.
Qualitative assessment of outputs from the four AI engines applied to the single hypothetical transcript; comparison against expected legal claims (authors' judgment that outputs matched those an experienced associate would produce).
Four major generative AI engines—DeepSeek, Claude, ChatGPT, and Grok—are useful legal analysis tools for employment law practitioners.
Experimental evaluation in which a single hypothetical client interview transcript was submitted to each of the four AI systems and their outputs were assessed by the authors.
A mixed-methods empirical research agenda is presented, proposing a future PLS-SEM approach to test the mediating role of the cognitive flywheel and the moderating effect of fractal governance on organizational resilience.
Methodological proposal described in the paper (research design and proposed analytic approach); no executed empirical study or sample reported.
Fractal governance architecture is proposed to mitigate systemic vulnerabilities such as automation bias.
Conceptual proposal of a governance design in the paper; no empirical test or sample provided.
The cognitive flywheel is the central mechanism of this dynamic capability, and the paper operationalizes it.
Theoretical operationalization within the paper (concept definition and proposed operational measures); no empirical measurement or sample reported.
The co-evolutionary dynamic is formalized using coupled non-linear differential equations and time decay integrals.
Mathematical formalization reported in the paper (modeling methods described); no empirical parameter estimation or sample provided.
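The excerpt does not reproduce the equations themselves; purely for illustration, a coupled non-linear system with a time-decay integral of the kind described might take a form such as the following, where H(t), M(t), and all parameters are placeholders assumed here, not the paper's notation:

```latex
% Illustrative sketch only: H(t) (human semantic capability), M(t)
% (machine syntactic capacity), and all coefficients are assumptions.
\begin{aligned}
\frac{dH}{dt} &= \alpha\, H(t)\,M(t) - \beta\, H(t)
                 + \gamma \int_0^t e^{-\lambda(t-s)}\, M(s)\,ds \\
\frac{dM}{dt} &= \delta\, H(t)\,M(t) - \varepsilon\, M(t)
\end{aligned}
```

The interaction terms capture the recursive coupling of the two capabilities, and the decay kernel lets past machine outputs influence current human capability with fading weight.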
Dynamic cognitive advantage arises from the historical, recursive, structural coupling of human semantic intent and machine syntactic processing (a co-evolutionary dynamic).
Conceptual theory introduced and argued in the paper (mechanism-level proposition); formalization provided but no empirical validation.
Conceptualizing the enterprise as a complex adaptive system operating far from thermodynamic equilibrium provides a more appropriate framing for organizations integrating AI and enables the theory of dynamic cognitive advantage.
Theoretical development and conceptual argumentation within the paper; formal framing rather than empirical test; no sample reported.
Leaders' AI symbolization lessens AI's negative impact on employees' emotional exhaustion.
Moderation analysis in the four-stage longitudinal study of 285 finance professionals; leader AI symbolization tested as moderator of the AI usage → emotional exhaustion path.
Leaders' AI symbolization strengthens AI's positive effect on employees' sense of self-determination.
Moderation analysis within the same four-stage longitudinal survey of 285 finance professionals; leader AI symbolization tested as moderator of the AI usage → sense of self-determination path.
AI usage can boost innovative work behavior by enhancing employees' sense of self-determination.
Four-stage longitudinal study (survey) of finance professionals (N=285); mediation analysis testing AI usage → sense of self-determination → innovative work behavior, grounded in SOR theory.
Human-AI systems should be designed under a cognitive sustainability constraint so that gains in hybrid performance do not come at the cost of degradation in human expertise.
Normative recommendation in the paper based on the conceptual/mathematical framework and the identified trade-off; presented as an argument rather than empirically validated policy outcome in the excerpt.
Together, these quantities provide a low-dimensional metric space for evaluating whether human-AI systems achieve genuine synergistic performance and whether such performance is cognitively sustainable for the human component over time.
Claim about the utility of the defined metrics, supported within the paper by the conceptual/mathematical framework and the proposed metric definitions (theoretical demonstration rather than reported empirical validation in the excerpt).
The paper defines a set of operational metrics: the Cognitive Amplification Index (CAI*), the Dependency Ratio (D), the Human Reliance Index (HRI), and the Human Cognitive Drift Rate (HCDR).
Explicit listing of newly proposed operational metrics in the paper; this is a descriptive claim about the paper's content (theoretical definitions), no sample size or empirical estimation provided in the excerpt.
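The paper's actual definitions of CAI*, D, HRI, and HCDR are not given in the excerpt; a hypothetical sketch of what such metrics could look like (all formulas below are illustrative placeholders, not the authors' operationalizations):

```python
# Hypothetical sketch only: the paper's definitions are not in the
# excerpt, so every formula here is an assumed placeholder.

def dependency_ratio(ai_delegated: int, total_decisions: int) -> float:
    """Share of decisions delegated to AI (assumed definition of D)."""
    return ai_delegated / total_decisions

def human_reliance_index(accepted_unverified: int, ai_suggestions: int) -> float:
    """Fraction of AI suggestions accepted without verification (assumed HRI)."""
    return accepted_unverified / ai_suggestions

def cognitive_drift_rate(unaided_scores: list[float]) -> float:
    """Average per-period change in unaided human performance (assumed HCDR);
    a negative value would indicate expertise degradation."""
    deltas = [b - a for a, b in zip(unaided_scores, unaided_scores[1:])]
    return sum(deltas) / len(deltas)

print(dependency_ratio(60, 100))              # 0.6
print(human_reliance_index(30, 50))           # 0.6
print(cognitive_drift_rate([0.9, 0.85, 0.8]))  # negative: unaided skill declining
```

A low-dimensional space of such quantities is what would let one ask whether hybrid gains coexist with a stable (non-negative) drift rate.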
The paper introduces a conceptual and mathematical framework to distinguish cognitive amplification (AI improves hybrid human-AI performance while preserving human expertise) from cognitive delegation (reasoning is progressively outsourced to AI).
Explicit contribution claim in the paper (description of a conceptual and mathematical framework); evidence consists of the model and formal definitions presented in the paper (no external empirical validation reported in the excerpt).
Given these findings, policymakers should favor 'strategic forbearance': applying existing laws rather than creating new regulations that could stifle the innovation and diffusion of AI.
Authors' normative policy recommendation based on their interpretation of the reviewed empirical literature (risk–benefit assessment); this is a prescriptive conclusion rather than an empirical finding, so no sample size applies.
Generative AI lowers entry costs for startups, facilitating new firm entry and product development.
Cited empirical and descriptive evidence in the literature review indicating reduced development costs and faster product prototyping enabled by AI tools; the brief does not provide a pooled sample size or a single quantitative estimate.
Generative AI significantly boosts productivity in specific tasks like coding, writing, and customer service—often by 15% to 50%.
Synthesis/review of empirical literature through 2025 (multiple empirical studies of task-level impacts, including field and lab studies and observational analyses); the brief reports aggregate reported effect ranges but does not list a single pooled sample size.
Institutional design (enforceable rules, auditable logs, human oversight on high-impact actions) is a precondition for safe delegation of real authority to LLM agents; systems should be stress-tested under governance-like constraints before assignment of real authority.
Policy recommendation derived from simulation findings that governance structure strongly influences corruption-related outcomes and that safeguards alone are not consistently sufficient; grounded in experiments and rubric-assessed outcomes across 28,112 transcript segments.
Among models operating below saturation, governance structure is a stronger driver of corruption-related outcomes than model identity.
Comparative analysis within the multi-agent governance simulations across different authority structures and model identities; outcomes aggregated and compared across regimes (based on the 28,112 transcript segments scored).
Integrity in institutional AI should be treated as a pre-deployment requirement rather than a post-deployment assumption.
Argument and recommendation based on results from multi-agent governance simulations evaluating rule-breaking and abuse; conclusions drawn from aggregate outcomes across simulated regimes and interventions (see study of 28,112 transcript segments).
The paper proposes design principles for effective, accountable, and adaptive sandboxes to contribute to debates on experimentalism in AI governance.
Stated contribution of the paper (descriptive claim about content; abstract does not list the principles or empirical testing).
Regulatory sandboxes (RSs) have emerged as a potential solution to AI regulatory challenges.
Descriptive observation and normative framing within the paper; contextual reference to the EU AI Act's treatment of sandboxes (no empirical sample reported in the abstract).
External inputs that bypass internal filtering shorten recognition delays (i.e., speed up detection of regime shifts).
Model extensions/analysis showing that when some inputs are allowed to bypass internal exclusion mechanisms, the dynamics of anchor updating detect regime changes faster; result comes from theoretical model manipulations, not empirical testing.
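The mechanism can be illustrated with a toy exponential-smoothing model (assumed here for illustration, not the paper's actual dynamics): an anchor tracks a signal, internal filtering down-weights surprising observations, and letting a fraction of inputs bypass that filter shortens the recognition delay after a regime shift.

```python
# Toy illustration (assumed model): an anchor tracks a signal via
# exponential smoothing, but "internal filtering" heavily discounts
# observations far from the current anchor. Inputs that bypass the
# filter are applied at full weight, so regime shifts are detected sooner.

def recognition_delay(bypass_fraction: float, gain: float = 0.2,
                      gate: float = 0.3, threshold: float = 0.5) -> int:
    anchor = 0.0
    signal = [0.0] * 50 + [1.0] * 200   # regime shifts from 0 to 1 at t = 50
    for t, x in enumerate(signal):
        surprise = abs(x - anchor)
        # Filtered path: discount surprising inputs as likely outliers.
        weight = 1.0 if surprise < gate else 0.05
        # Bypassing inputs ignore the filter and use full weight.
        weight = bypass_fraction * 1.0 + (1 - bypass_fraction) * weight
        anchor += gain * weight * (x - anchor)
        if t >= 50 and anchor > threshold:
            return t - 50   # steps after the shift until recognition
    return len(signal)

print(recognition_delay(0.0))   # slow: the new regime looks like outliers
print(recognition_delay(0.3))   # faster once some inputs bypass the filter
```

With no bypass, every post-shift observation is treated as an outlier until the anchor has already drifted most of the way; a modest bypass fraction breaks that lock-in.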
On the LoCoMo benchmark, the architecture achieves 74.8% overall accuracy.
Benchmark evaluation reported in the paper using the LoCoMo benchmark with a reported overall accuracy of 74.8%.
Adversarial governance compliance was 100%.
Adversarial compliance testing reported in the paper (linked to the adversarial query experiments); reported compliance = 100%.
There was zero cross-entity leakage across 500 adversarial queries.
Adversarial testing reported in the paper: 500 adversarial queries used to test cross-entity leakage; result = zero leakage.
Progressive context delivery yielded a 50% token reduction.
Reported experimental result in the controlled experiments indicating token usage reduction from progressive delivery = 50%.
Governance routing precision was 92% in the experiments.
Reported experimental metric from the controlled experiments (N=250, five content types) showing governance routing precision = 92%.
The system achieved 99.6% fact recall (with complementary dual-modality coverage) in the controlled experiments.
Reported experimental result from the controlled experiments (N=250, five content types) as stated in the paper.
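Metrics like the reported 92% routing precision and 99.6% fact recall follow the standard precision/recall definitions; a minimal arithmetic sketch (the counts below are made up for illustration and are not the study's data):

```python
# Standard precision/recall arithmetic; counts are illustrative only,
# not the study's actual confusion-matrix data.

def precision(true_pos: int, false_pos: int) -> float:
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

# e.g. 230 correct routings out of 250 routed items -> 0.92 precision
print(precision(230, 20))   # 0.92
# e.g. 249 of 250 gold facts recovered -> 0.996 recall
print(recall(249, 1))       # 0.996
```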
Total effect of trust on brand loyalty is approximately 0.800 (total β ≈ 0.800 = direct β 0.410 + indirect β ≈ 0.390), all reported as statistically significant (p < .001 for direct effects; p = .001 for indirect).
Path coefficients reported from SEM (n = 450) and arithmetic combination of direct and indirect standardized effects as reported in the paper.
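The additive decomposition of the total effect can be checked directly from the standardized coefficients reported in the paper:

```python
# SEM total-effect decomposition: total = direct + indirect.
# Values are the standardized coefficients as reported in the paper.
direct = 0.410    # Trust -> Brand Loyalty (direct path)
indirect = 0.390  # Trust -> Adoption Intention -> Brand Loyalty (reported)
total = direct + indirect
print(round(total, 3))  # 0.8
```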
Adoption intention for AI marketing strongly predicts brand loyalty (Adoption Intention → Brand Loyalty: standardized β = 0.717, p < .001).
Cross-sectional survey (n = 450 Gen Z); SEM (SPSS AMOS); reported standardized path coefficient β = 0.717 with p < .001.
Trust in AI-driven marketing directly increases Generation Z consumers' brand loyalty (Trust → Brand Loyalty: standardized β = 0.410, p < .001).
Cross-sectional survey (n = 450 Gen Z); SEM (SPSS AMOS); reported standardized path coefficient β = 0.410 with p < .001.
Trust in AI-driven marketing has a strong positive effect on Generation Z consumers' intention to adopt AI marketing (Trust → Adoption Intention: standardized β = 0.718, p < .001).
Cross-sectional survey (n = 450 Generation Z respondents); analysis via Structural Equation Modeling (SPSS AMOS); reported standardized path coefficient β = 0.718 with p < .001.
The study's strengths include multimethod triangulation, a very large behavioral dataset (150 million interactions), and controlled simulation experiments informed by empirical observation.
Methods reported: mixed‑methods sequential design with (1) 6‑month lab ethnography (n = 23), (2) computational analysis of 150 million customer interactions, and (3) empirically grounded agent‑based simulation experiments.
The Algorithmic Canvas is an operational medium where segmentation, targeting, and positioning parameters co‑evolve through iterative human–AI collaboration.
Design and implementation described in the study; observation of Canvas‑mediated interactions during a 6‑month lab ethnography inside a Fortune 500 company (n = 23).
Autopoietic STP + Algorithmic Canvas approach is 44% more resilient to market shocks than traditional, process‑based STP (p < 0.01).
Agent‑based simulations and comparative analyses informed by empirical calibration; supported by large‑scale behavioral data (150 million customer interactions) and simulation experiments. Statistical test reported with p < 0.01. Exact number of simulation runs and full test details not specified in the summary.
Policy recommendations include standards on explainability, audit trails, certification for finance/tax AI systems, stronger data governance, and public–private coordination to update regulatory guidance.
Paper's policy and governance recommendations drawn from case findings and literature synthesis; prescriptive content rather than evaluated interventions.
Deployments should build governance, explainability, and auditability into systems and start with pilots on high-volume, well-structured tasks before scaling.
Paper recommendations based on case experience and analytic framing; advocated strategy rather than empirically validated at scale within the paper.
To mitigate risks and realize benefits, AI systems in finance/tax should combine AI with human-in-the-loop controls and clear escalation paths.
Prescriptive recommendation grounded in case lessons and literature on safe AI deployment; presented as a best-practice guideline rather than tested intervention.
Technical building blocks leveraged in these deployments include large language models (LLMs), OCR plus structured information extraction, retrieval-augmented generation (RAG) and knowledge bases, and process automation/RPA.
Explicit technical characteristics section and case descriptions in the paper identify these components as core to implementations.
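The retrieval-augmented generation pattern named among these building blocks can be sketched generically; the toy word-overlap retriever and the example knowledge-base entries below are assumptions for illustration, while the actual deployments use production LLMs, OCR pipelines, and curated knowledge bases.

```python
# Generic RAG pattern, sketched with a toy word-overlap retriever.
# The retriever, prompt format, and knowledge-base entries are all
# illustrative assumptions, not the paper's implementation.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved context plus the question for an LLM call."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

kb = [
    "Invoices above 10000 EUR require a second approval.",
    "VAT returns are filed quarterly in this jurisdiction.",
    "The office coffee machine is serviced monthly.",
]
print(build_prompt("When are VAT returns filed?", kb))
```

Production systems replace the overlap score with embedding similarity over a vector index, but the grounding step (retrieved facts injected into the prompt) is the same.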
Generative AI is used for risk control and audit functions, including real-time monitoring, fraud detection, KYC/AML screening, and automated exception reporting.
Reported use-cases in the two case organizations and corroborating industry reports discussed in the literature review portion of the paper.
For tax declaration, generative AI enables extraction of tax-relevant facts from invoices and contracts, drafting of tax returns, compliance checks, and scenario simulations.
Case examples and literature synthesis describing OCR + information extraction and LLM-assisted drafting workflows used in practice.
Generative AI is applied to fund management tasks such as cashflow forecasting, anomaly detection, and automated workflows for payments and collections.
Case descriptions and technical mapping in the paper showing implementations at the sharing center and professional services firm level.
Accounting automation use-cases include automated bookkeeping, reconciliations, journal entry suggestion, and error detection using LLMs and document understanding.
Detailed scope mapping and case examples from Xiaomi and Deloitte illustrating these accounting applications; supported by a literature review of technical capabilities.