Evidence (4049 claims)
Claim counts by topic:

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. A dash denotes zero; the listed directions may not sum exactly to each row's total.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Governance
We critically compare LLM-generated rulings against 10,000 real-world court judgments from China Judgments Online (CJOL).
Dataset statement: the paper compares model outputs to a corpus of 10,000 CJOL labor dispute judgments.
We introduce a novel stress test that evaluates LLM-generated labor dispute outcomes by injecting social media sentiment as an external pressure.
Methodological description in the paper: a designed stress test where social media sentiment is used to perturb LLM outputs for labor dispute cases.
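The sentiment-injection stress test described above can be sketched as a small harness that builds a control prompt and sentiment-perturbed variants for the same case, then compares the resulting rulings. All names here (`build_prompt`, the sentiment snippets, the template wording) are illustrative assumptions; the paper's actual prompt templates and evaluation pipeline are not reproduced here.

```python
# Minimal sketch of a sentiment-injection stress test for LLM rulings.
# Template text and sentiment snippets are invented for illustration.

BASE_TEMPLATE = (
    "You are adjudicating a labor dispute.\n"
    "Case facts: {facts}\n"
    "Issue a ruling with a brief rationale."
)

SENTIMENT_SNIPPETS = {
    "pro_worker": "Social media context: widespread public outrage supports the worker.",
    "pro_employer": "Social media context: public commentary largely sides with the employer.",
    "neutral": "",
}

def build_prompt(facts: str, sentiment: str = "neutral") -> str:
    """Return the case prompt, optionally perturbed with a sentiment snippet."""
    prompt = BASE_TEMPLATE.format(facts=facts)
    snippet = SENTIMENT_SNIPPETS[sentiment]
    return f"{snippet}\n{prompt}" if snippet else prompt

def stress_test(facts: str, rule_fn):
    """Compare rulings across sentiment conditions.

    rule_fn stands in for an LLM call (prompt -> ruling string).
    Returns one ruling per condition so drift can be measured afterward,
    e.g. against the matched CJOL judgment.
    """
    return {cond: rule_fn(build_prompt(facts, cond)) for cond in SENTIMENT_SNIPPETS}
```

In the paper's setup, the drift measured here would then be benchmarked against the corresponding real-world judgment from the CJOL corpus.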
The paper treats data as a new type of production factor and endogenizes it within the production function.
Theoretical/methodological: the paper constructs a macro-level theoretical model that explicitly includes data as an endogenous input in the production function (no empirical/sample data).
In the near term, the most plausible equilibrium is bounded autonomy, in which AI agents operate as supervised co-pilots, monitoring systems, and constrained execution modules embedded within human decision processes.
Theoretical argument and forward-looking assessment by the authors based on the proposed framework and plausibility considerations; not presented as the result of a causal empirical study in the excerpt.
Economic evaluations of GLAI should account for end-to-end risk externalities (error propagation, institutional trust, rights impacts), not only short-term productivity gains.
Methodological recommendation grounded in conceptual synthesis of technical, behavioral, and legal risks; normative argument rather than empirical result.
Generative Legal AI (GLAI) systems are built on token-prediction (LLM) architectures rather than formal legal-reasoning architectures.
Conceptual and technical analysis in the paper distinguishing GLAI from other legal-tech; literature synthesis on common LLM architectures. No original empirical dataset or sample size—qualitative/technical review.
The paper's formalism shows that prompt/system messages shape distributions over possible execution paths (indirect control) but do not evaluate actual partial paths at runtime.
Formal mapping in the paper that treats prompts as shaping prior over paths; conceptual argument and illustrative examples.
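The distinction above (prompts as priors over paths, with no runtime path evaluation) can be illustrated with a toy reweighting. The path names and weights below are invented for illustration; nothing in this sketch comes from the paper's actual formalism.

```python
# Toy illustration: a prompt reweights a prior distribution over execution
# paths (indirect control) but never inspects a partially executed path.

def normalize(weights):
    total = sum(weights.values())
    return {path: w / total for path, w in weights.items()}

# Prior over possible execution paths (before any prompt).
prior = {"read_only": 0.5, "read_then_write": 0.3, "external_call": 0.2}

def apply_prompt(prior, multipliers):
    """Multiply each path's prior weight by a prompt-induced factor, then
    renormalize. Nothing here observes or scores a path mid-execution."""
    return normalize({p: prior[p] * multipliers.get(p, 1.0) for p in prior})

# A cautious system message down-weights paths that act on the world.
posterior = apply_prompt(prior, {"read_then_write": 0.2, "external_call": 0.1})
```

The key point the sketch makes concrete: the prompt only shifts probability mass before execution; any runtime check on partial paths would require a separate mechanism.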
Returns to AI are heterogeneous across firms; estimating treatment effects requires attention to selection, complementarities, and dynamic adoption pipelines.
Methodological argument referencing treatment-effect literature and observed firm heterogeneity; supported by conceptual examples rather than a single empirical treatment-effect estimate.
The Article translates these insights into risk-sensitive guideposts for modernizing governance of AI-enabled tools and emerging modalities, from agentic systems to blockchain-deployed smart contracts.
Prescriptive/conceptual policy guidance presented in the Article (normative recommendations; governance framework).
The Innovation Frontier traces LegalTech’s evolution from 2000s-vintage e-discovery to generative AI.
Historical/chronological analysis in the Article (literature review/history of LegalTech provided by authors).
The Legal Services Value Chain disaggregates the lifecycle of a legal matter into five distinct nodes of activity.
Model description in the Article (conceptual architecture; decomposition of legal work).
The Article develops two core organizing models: the Legal Services Value Chain and the Innovation Frontier.
Explicit claim in the Article describing conceptual/model contributions (theoretical/model-building).
This Article provides a practical framework for navigating the shifting terrain of legal innovation and AI.
Statement of purpose in the Article (conceptual contribution; framework development). No empirical validation reported in the excerpt.
There are action tools for higher-stakes tasks like financial transactions.
Observed examples of action tools in the monitored MCP repositories that perform higher-stakes functions, with financial transactions given as an explicit example in the paper.
We use O*NET mapping to identify each tool's task domain and consequentiality.
Method described in paper: mapping each tool to O*NET task domains and consequentiality using the monitored tool metadata and descriptions.
We categorise tools according to their direct impact: perception tools to access and read data, reasoning tools to analyse data or concepts, and action tools to directly modify external environments.
Methodological classification described in paper (taxonomy of tools into perception, reasoning, action); applied to monitored MCP server dataset.
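The two methods above (the perception/reasoning/action taxonomy and the O*NET-style domain mapping) can be sketched together as a crude keyword classifier over tool metadata. The keyword lists, domain labels, and the high-stakes rule are illustrative assumptions, not the paper's actual mapping procedure.

```python
# Sketch: classify a tool's category (perception/reasoning/action) and a
# rough task domain from its description text. Keywords are invented.

CATEGORY_KEYWORDS = {
    "action": ("write", "send", "execute", "transfer", "delete", "pay"),
    "perception": ("read", "fetch", "list", "search", "get"),
    "reasoning": ("analyze", "summarize", "classify", "plan"),
}

DOMAIN_KEYWORDS = {
    "finance": ("payment", "transfer", "invoice", "loan"),
    "communication": ("email", "message", "notify"),
    "data_management": ("database", "file", "record"),
}

def classify_tool(description: str):
    """Assign a category and rough task domain from a tool's metadata text."""
    text = description.lower()
    category = next(
        (cat for cat, kws in CATEGORY_KEYWORDS.items() if any(k in text for k in kws)),
        "unknown",
    )
    domain = next(
        (dom for dom, kws in DOMAIN_KEYWORDS.items() if any(k in text for k in kws)),
        "other",
    )
    # Action tools in consequential domains (e.g. finance) are higher-stakes,
    # matching the paper's financial-transactions example.
    high_stakes = category == "action" and domain == "finance"
    return {"category": category, "domain": domain, "high_stakes": high_stakes}
```

A real O*NET mapping would match tool descriptions to coded O*NET task statements rather than ad-hoc keywords; this sketch only shows the shape of the pipeline.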
The research surveys current methodologies and empirical evidence related to regulatory early-warning systems and synthesizes findings from the empirical literature.
Paper states it examines existing methodologies and empirical findings (literature review / synthesis); no scope (e.g., number of studies reviewed) given in the excerpt.
The study uses a mixed-methods approach combining qualitative insights from 1,500 semi-structured customer interviews with quantitative analysis of transaction records, loan repayment histories, and account activity.
Paper states methods explicitly in abstract: 1,500 semi-structured interviews plus quantitative analysis of transaction records, loan repayment histories, and account activity (case-study approach across three platforms).
The paper is intentionally public-safe: it omits proprietary implementation details, training recipes, thresholds, hidden-state instrumentation, deployment procedures, and confidential system design choices, and therefore the contribution is theoretical rather than operational.
Statement about the paper's scope and publication choices; directly asserted by the authors regarding omitted content and the theoretical nature of the contribution.
The paper introduces a constraint-coupled reasoning framework with four elements: bounded transition burden, path-load accumulation, dynamically evolving feasible regions, and a capability-stability coupling condition.
Descriptive/theoretical: the paper explicitly defines and enumerates these four framework elements. This is a claim about the paper's content rather than an empirical finding.
The frequency of manipulative behaviours (propensity) of an AI model is not consistently predictive of the likelihood of manipulative success (efficacy), underscoring the importance of studying these dimensions separately.
Analytic results reported in the study comparing model propensity (how often manipulative outputs are produced) with measures of success (induced belief/behavior changes), finding inconsistent or weak association.
For readers less familiar with the Bayesian and decision-theoretic language, key terms are defined in a glossary at the end of the article.
Statement about the article's structure and supporting material (presence of glossary noted in the article).
The gap between a continuously updated belief state and your frozen deployed model is 'learning debt.'
Terminology/definition introduced by the author in the article (glossary and definitional exposition).
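One way to make the "learning debt" definition above operational is to measure the divergence between the updated belief state and the frozen model's beliefs. Using KL divergence and the toy belief distributions below is our illustrative choice, not the article's; the article only introduces the term.

```python
# Toy quantification of "learning debt": divergence between an updated
# belief state and the frozen deployed model's beliefs.
import math

def kl_divergence(p, q):
    """KL(p || q) over a shared discrete event space; q must be positive."""
    return sum(pi * math.log(pi / q[k]) for k, pi in p.items() if pi > 0)

frozen_model = {"churn": 0.10, "stay": 0.90}    # beliefs at deployment time
updated_belief = {"churn": 0.25, "stay": 0.75}  # beliefs after new data arrives

learning_debt = kl_divergence(updated_belief, frozen_model)
```

Under this reading, retraining "pays down" the debt: resetting the frozen distribution to the updated one drives the divergence back to zero.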
Model retraining is usually treated as an ongoing maintenance task.
Author's descriptive claim in the article; presented as an observation about prevailing practice (no empirical sample or data reported).
Study methodology: two online experiments were conducted via the crowdsourcing platform Prolific, with sample sizes of n = 325 (Study 1) and n = 371 (Study 2); mean participant age 35 years; 55% female.
Methodological and sample description provided in the abstract.
Late disclosure of AI involvement did not improve affective engagement for AI-generated content.
Reported experimental result in the abstract from the two online studies manipulating disclosure timing (early vs. late).
The study was conducted by the Mohammed bin Rashid School of Government’s Future of Government Center, in collaboration with global AI pioneers.
Authorship and collaboration statement in the report.
The report highlights the key findings of a field study covering ten Arab countries to explore the realities and challenges of AI governance.
Report statement describing the geographic scope of the field study (explicitly: ten Arab countries).
The recommendations are based on regional research that included hundreds of leaders active in the AI domains, from the public and private sectors.
Report statement claiming participant base of the underlying research (described as 'hundreds of leaders').
The authors construct a mean-reverting jump-diffusion stochastic process model and conduct Monte Carlo simulations to evaluate hedging efficiency of the proposed futures contracts.
Methodological claim: explicit description of the mathematical model (mean-reverting jump-diffusion) and simulation method (Monte Carlo) used in the paper.
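The model class named above can be sketched as an Ornstein-Uhlenbeck process with compound Poisson jumps, simulated by an Euler scheme. All parameter values below are illustrative placeholders, not the authors'; the sketch only shows the structure of the Monte Carlo step.

```python
# Minimal Monte Carlo sketch of a mean-reverting jump-diffusion:
# dX = kappa*(theta - X) dt + sigma dW + jump dN (Euler discretization).
import math
import random

def simulate_path(x0, kappa, theta, sigma, jump_rate, jump_scale, T, steps, rng):
    dt = T / steps
    x = x0
    path = [x]
    for _ in range(steps):
        drift = kappa * (theta - x) * dt
        diffusion = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # Poisson jump arrival approximated per step; Gaussian jump sizes.
        jump = rng.gauss(0.0, jump_scale) if rng.random() < jump_rate * dt else 0.0
        x += drift + diffusion + jump
        path.append(x)
    return path

def monte_carlo_terminal_mean(n_paths=2000, seed=7):
    rng = random.Random(seed)
    terminal = [
        simulate_path(100.0, kappa=2.0, theta=100.0, sigma=5.0,
                      jump_rate=0.5, jump_scale=10.0, T=1.0, steps=250, rng=rng)[-1]
        for _ in range(n_paths)
    ]
    return sum(terminal) / n_paths
```

Evaluating hedging efficiency, as the paper does, would then compare the dispersion of hedged versus unhedged payoffs across such simulated paths.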
Capital income taxes, worker equity participation, universal basic income, upskilling, and Coasian bargaining cannot eliminate the excess automation.
Model-based policy counterfactuals evaluated in the paper showing these interventions fail to achieve the social optimum in the theoretical framework; no empirical sample.
Wage adjustments and free entry cannot eliminate the excess automation.
Analytical result in the model showing endogenous wage changes and free entry do not restore the socially optimal level of employment; theoretical equilibrium analysis, no empirical data.
We analyze a regional standardized sentiment database (97,719 responses).
Dataset description in the paper specifying the size of the standardized sentiment database.
We analyze a raw Fukui spending database (90,350 records).
Dataset description in the paper specifying the size of the raw Fukui spending database.
The analysis relies on partial least squares path modeling (PLS-PM) to test eight predictions linking technological perceptions, organizational factors, and adoption outcomes.
Author-stated analytical method: PLS-PM; eight predictions tested; uses the survey data described above.
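The composite logic behind PLS-PM can be illustrated in miniature: score each latent construct as an equal-weight composite of its standardized indicators, then estimate a path as the standardized slope between construct scores. This is a simplification for intuition only; full PLS-PM iteratively re-estimates the indicator weights, which this sketch does not.

```python
# Simplified composite-score illustration of the PLS-PM idea (not the full
# iterative algorithm). Rows are respondents; columns are indicators.
import statistics

def standardize(values):
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def construct_scores(indicator_columns):
    """Equal-weight composite of standardized indicators."""
    std_cols = [standardize(col) for col in indicator_columns]
    n = len(std_cols[0])
    return [sum(col[i] for col in std_cols) / len(std_cols) for i in range(n)]

def path_coefficient(x_scores, y_scores):
    """Standardized slope of y-construct scores on x-construct scores."""
    x = standardize(x_scores)
    y = standardize(y_scores)
    return sum(xi * yi for xi, yi in zip(x, y)) / (len(x) - 1)
```

In the study's setting, each of the eight predictions would correspond to one such path among constructs for technological perceptions, organizational factors, and adoption outcomes.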
The study uses cross-sectional survey data from 523 human resource professionals and hiring managers representing 184 organizations across multiple industries in the United States.
Author-stated sample description in the paper: cross-sectional survey; 523 HR professionals/hiring managers; 184 organizations; multiple industries; U.S.
The study synthesises findings from 36 peer-reviewed articles published between 2015 and 2025.
Systematic literature synthesis / review of peer-reviewed articles; sample = 36 articles (2015–2025) as stated in the paper.
We construct a multidimensional energy justice index to analyze AI’s net effects, pathways, and institutional dependencies.
Methodological statement: authors create an energy justice index (multidimensional) used as dependent variable in empirical analysis.
This study uses a panel dataset for 30 Chinese provinces from 2008 to 2022.
Statement of dataset coverage in the paper: 30 provinces, years 2008–2022 (panel data).
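The index construction described above can be sketched over a province-year panel: min-max normalize each sub-indicator across observations, then aggregate with equal weights. The indicator names, the toy panel, and the normalization and weighting choices are illustrative assumptions; the paper's actual index may be built differently.

```python
# Sketch: build a multidimensional composite index over a province-year panel.
# Indicator names and values are invented; the study spans 30 Chinese
# provinces over 2008-2022, far larger than this toy panel.

def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(panel, indicators):
    """panel: list of dicts, one per province-year observation.
    Returns a copy of the panel with a new 'index' field in [0, 1]."""
    normalized = {k: min_max([row[k] for row in panel]) for k in indicators}
    out = []
    for i, row in enumerate(panel):
        score = sum(normalized[k][i] for k in indicators) / len(indicators)
        out.append({**row, "index": score})
    return out

panel = [
    {"province": "A", "year": 2008, "access": 0.6, "affordability": 0.4},
    {"province": "A", "year": 2009, "access": 0.7, "affordability": 0.5},
    {"province": "B", "year": 2008, "access": 0.2, "affordability": 0.9},
    {"province": "B", "year": 2009, "access": 0.3, "affordability": 1.0},
]
scored = composite_index(panel, ["access", "affordability"])
```

The resulting index would then serve as the dependent variable in the panel analysis of AI's net effects.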
This study uses a mixed-method research design combining quantitative ROI modelling and cost–benefit analysis, qualitative synthesis of secondary enterprise case studies, and architectural analysis of Azure-native GenAI services.
Explicit methodological description in the abstract of the paper.
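The quantitative ROI side of the design above can be sketched as a discounted cost-benefit calculation. All cash-flow figures and the discount rate below are invented placeholders, not values from the study.

```python
# Minimal discounted ROI / cost-benefit sketch for a technology rollout.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def roi(rate, initial_cost, annual_benefits):
    """Discounted ROI: excess of benefit NPV over the upfront cost."""
    benefit_npv = npv(rate, [0.0] + list(annual_benefits))
    return (benefit_npv - initial_cost) / initial_cost

# Example: $500k upfront, $250k/year in benefits for 3 years, 10% discount.
project_roi = roi(0.10, 500_000, [250_000, 250_000, 250_000])
```

A full ROI model of the kind the study describes would add operating costs, adoption ramps, and sensitivity analysis over the discount rate.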
Ninety-five high-quality studies were analyzed using principal component analysis and k-means clustering.
Paper states screening produced 95 high-quality studies which were subjected to PCA and k-means clustering for analysis.
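The analysis pipeline named above (PCA followed by k-means) can be sketched with a small self-contained implementation: PCA via SVD, then Lloyd's algorithm on the leading components. The feature matrix below is random stand-in data shaped like the screened corpus (95 studies, a handful of coded features); the paper's actual features are not reproduced here.

```python
# Sketch: PCA (via SVD) then k-means clustering, numpy only.
import numpy as np

def pca(X, n_components):
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # project onto top PCs

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([
            X[labels == j].mean(0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels, centers

rng = np.random.default_rng(1)
# 95 "studies" x 6 coded features, arranged in two loose groups.
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(4, 1, (45, 6))])
scores = pca(X, n_components=2)
labels, _ = kmeans(scores, k=2)
```

In practice one would use library implementations (and choose k with a validity criterion); the sketch only shows the two-stage structure of the analysis.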
A systematic literature review of 450 records from major databases was conducted using PRISMA 2020 guidelines.
Statement in the paper describing methods: systematic literature review using PRISMA 2020; initial search returned 450 records from major databases.
Specification and implementation are available at https://github.com/chelof100/acp-framework-en
Repository URL provided in the specification text; points to the stated implementation and documentation artifacts.
The specification defines more than 62 verifiable requirements and 12 prohibited behaviors.
Quantitative claims stated in the specification about requirement and prohibited-behavior counts.
The v1.13 release includes an OpenAPI 3.1.0 specification for all HTTP endpoints.
Specification/repository statement indicating an OpenAPI 3.1.0 specification is provided for HTTP endpoints.
The v1.13 release includes 51 signed conformance test vectors (Ed25519 + SHA-256).
Repository/specification statement listing 51 signed conformance test vectors and the signature/hash algorithms used.
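A consumer of such signed vectors would typically verify both a content hash and the signature. The sketch below checks only the SHA-256 digest over a hypothetical vector format; the real format and canonicalization rules are defined in the repository, and the Ed25519 signature check would require a crypto library (e.g. the `cryptography` package), so it is only noted in a comment.

```python
# Sketch: check a conformance vector's SHA-256 content digest.
# The vector fields and canonical JSON encoding are assumptions.
import hashlib
import json

def content_digest(vector: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the vector payload."""
    canonical = json.dumps(vector, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def check_vector(vector: dict, expected_digest: str) -> bool:
    # A full check would also verify the Ed25519 signature over the digest
    # against the publisher's public key.
    return content_digest(vector) == expected_digest

vec = {"id": "TV-001", "input": "ping", "expected": "pong"}
digest = content_digest(vec)
```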
The v1.13 release includes a Go reference implementation of 22 packages covering all L1-L4 capabilities.
Repository statement describing a Go reference implementation comprising 22 packages and coverage claim for L1-L4.
The v1.13 specification comprises 36 technical documents organized into five conformance levels (L1-L5).
Explicit quantitative statement in the specification/repository describing document count and organization.
The paper presents a formal evolutionary taxonomy of generative AI spanning five eras (1943–present) and analyzes frontier lab dynamics, sovereign AI emergence, and post-training alignment evolution from RLHF through GRPO.
Conceptual taxonomy and historical/organizational analysis provided in the paper. No empirical sample size reported in the excerpt.
The framework extends the Sustainability Index of Han et al. (2025) from hardware-level analysis to ecosystem-level analysis.
Conceptual / methodological extension claimed by the authors referencing Han et al. (2025). No empirical sample size reported in the excerpt.