Evidence (4049 claims)
Claims by topic:

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
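One way to read the matrix is as row-wise shares, e.g. the fraction of each outcome's claims that report a positive finding. A minimal sketch of that arithmetic, using the (Positive, Total) pairs from a few rows transcribed from the table above (the row selection is illustrative):

```python
# Share of claims per outcome that report a positive finding, using the
# (Positive, Total) pairs from a few rows of the evidence matrix.
rows = {
    "Firm Productivity": (273, 389),
    "AI Safety & Ethics": (112, 358),
    "Job Displacement": (5, 45),
}

def positive_share(row):
    """Fraction of a row's claims with a positive direction of finding."""
    positive, total = row
    return positive / total

# Print outcomes from most to least positive.
for outcome, row in sorted(rows.items(), key=lambda kv: -positive_share(kv[1])):
    print(f"{outcome}: {positive_share(row):.0%} positive")
```

The same computation extends to any column (negative, mixed, null) for a quick directional profile of an outcome category.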
Governance
Policymakers face trade-offs between promoting innovation and market efficiency on one hand and protecting privacy, fairness, and national security on the other; economic analysis can inform how these trade-offs are calibrated.
Normative policy analysis and synthesis of literature on digital regulation and trade-offs; supported by comparative observations of regulatory priorities across jurisdictions.
Safeguards such as audit trails, explainability, and human oversight impose additional implementation costs that must be weighed against efficiency benefits.
Normative and economic reasoning based on requirements for compliance and system design; no empirical cost estimates provided.
There is a fundamental tension between AI-driven efficiency and core administrative-law principles—discretion, due process, and accountability.
Doctrinal legal analysis of administrative-law principles in Vietnam and comparative institutional analysis of AI adoption in other systems.
The net educational value of AI-generated feedback depends on alignment with pedagogical goals, quality evaluation, integration with human teaching, and governance to manage equity, privacy, and incentives.
Synthesis statement from the meeting report produced by 50 interdisciplinary scholars; conceptual judgment rather than empirical proof.
Convergence after exemplar exposure occurred both through tightening of estimates within a measure family and through agents switching measure families.
Agent-level tracking across stages showed two patterns following exemplar exposure: (1) reduced within-family dispersion (tighter estimates) and (2) categorical switches in measure selection by some agents, as recorded across the 150-agent sample.
LLMs excel at extracting and generating arguments from unstructured text but are opaque and hard to evaluate or trust.
Synthesis of recent LLM literature and observed properties (generation capability vs. opacity); no empirical evaluation within this paper.
The paper is primarily theoretical and historical; empirical validation is needed to quantify the irreducible component of LLM value, and practical degrees of rule‑extractability may exist even if some capabilities remain tacit.
Stated limitations section acknowledging the theoretical nature of the work and the need for empirical follow‑up.
If an LLM's full capability were reducible to an explicit rule set, that rule set would be an expert system; because expert systems are empirically and historically weaker than LLMs, this leads to a contradiction (supporting non‑rule‑encodability).
Logical proof‑by‑contradiction presented in the paper, supported by conceptual mapping between rule sets and expert systems and qualitative historical comparisons.
The paper's proposed ISB+NDMS approach is tailored to the Russian institutional context (leveraging historical planning experience) and its transferability to other political-economic systems is uncertain.
Comparative/transferability claim based on institutional analysis and normative reasoning in the paper; no cross-country empirical comparisons provided.
Net gains from AI are neither automatic nor evenly distributed; benefits depend on translation rates to clinical success and on addressing non-technical enablers.
Synthesis and conditional argument informed by sector observations; not backed by empirical distributional analysis in the paper.
Alignment with evolving regulatory expectations (evidence standards, auditing, liability) is necessary to translate AI capabilities into products and reduce adoption risk.
Policy-focused argument referencing regulatory uncertainty; no empirical measures of regulatory impact included.
Realized, sustained impact ('democratized discovery') from AI depends on non-technological enablers: high-quality interoperable data, rigorous validation, transparency/auditability, workforce upskilling, ethical oversight, and regulatory alignment.
Synthesis and prescriptive argument in editorial grounded in observed constraints; no empirical testing of causal dependence provided.
Reward mechanisms reviewed include up-front token sales, milestone-triggered payouts, bounties, and royalties/licensing revenue distribution.
Synthesis of literature and case-study descriptions documenting available reward/payment mechanisms used by DAOs in decentralized science contexts.
Decision models in DAO governance include token-weighted voting, quadratic voting, reputation/stake-based delegation, and multisig/DAO councils for off-chain execution.
Theoretical review of governance mechanisms and survey of existing DAO practices as reported in secondary sources and project documentation.
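Of the decision models listed above, quadratic voting has the most distinctive cost structure: casting n votes on one proposal costs n² voice credits, so expressing intensity of preference gets progressively more expensive. A minimal sketch of that arithmetic (illustrative only; the function names are ours, not drawn from any specific DAO implementation):

```python
import math

# Quadratic voting: casting n votes on one proposal costs n**2 credits,
# so a voter's influence grows only with the square root of credits spent.
def vote_cost(votes: int) -> int:
    """Credits required to cast `votes` votes on a single proposal."""
    return votes ** 2

def max_votes(credits: int) -> int:
    """Most votes affordable from a given credit budget."""
    return math.isqrt(credits)

# 100 credits buy at most 10 votes here, versus 100 votes under
# token-weighted (one token, one vote) voting.
print(vote_cost(10), max_votes(100))
```

The square-root relationship is why quadratic voting is often proposed as a damper on large token holders relative to token-weighted voting.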
The review synthesizes cross-domain evidence on the use of AI across the continuum from target identification to regulatory integration and critically evaluates existing limitations, including data bias, limited interpretability, and regulatory ambiguity.
Statement about the scope and content of the review (literature synthesis and critical evaluation). This is a description of the paper's methods/content rather than an empirical finding; the excerpt indicates these topics are discussed.
The research methodology combines systemic analysis, comparative assessment of international practices, and analytical generalization of organizational learning models, enabling capture of both structural trends and concrete institutional responses to technological changes.
Methodological statement from the paper describing its approach; this is a factual claim about methods used rather than an empirical finding.
The study investigates the benefits and drawbacks of incorporating innovative artificial intelligence technologies into industrial policy.
Author-stated research objective reported in the text; evidence claimed to come from literature review (novel studies and existing literature), but no specific studies, sample sizes, or empirical measures are provided in the excerpt.
The paper constructs three policy-contingent labor market scenarios for 2025–2035: (1) an Augmented Services Economy with inclusive productivity gains, (2) a Dual-Speed Labor Market characterized by polarization and uneven adjustment, and (3) a Disruptive Automation Shock involving significant displacement and social strain.
Prognostic, scenario-based approach integrating the three evidence bases (task-level capability mapping, occupational exposure/complementarity analysis, and firm- and worker-level adoption evidence). The scenarios are developed and described in the paper for the 2025–2035 horizon.
Helicoid dynamics is a specific failure regime: a system engages competently, drifts into error, accurately names what went wrong, then reproduces the same pattern at a higher level of sophistication, recognizing it is looping and continuing nonetheless.
Definition introduced in the paper and illustrated by the reported case series; the claim is conceptual/phenomenological rather than a statistical result.
The review synthesizes findings across five thematic areas: AI‑driven task automation and decision support; digital literacy and capacity building; gender‑sensitive employment patterns; infrastructural and policy challenges; and sustainable development outcomes.
Thematic synthesis of the 55 included articles as described in the paper; themes explicitly listed by the authors.
Prevalence and risk factors for poverty differ by gender, as does the nature of vulnerability.
Stated as a general empirical claim in the introduction, drawing on broader literature (no specific study, method, or sample size provided in the excerpt).
Major actors such as the United States, China, and the European Union pursue distinct models of AI development and regulation.
Comparative policy analysis and qualitative document review of national/regional AI strategies and regulatory proposals for the United States, China, and the EU (specific documents and sample size not specified).
The study identifies the emergence of three competing governance paradigms: the innovation-driven liberal model, the ethics-oriented regulatory model, and the state-controlled authoritarian model.
Finding from the paper's comparative policy analysis and qualitative review of policy documents across major actors (United States, European Union, China); underlying document sources referenced qualitatively but not enumerated as a quantitative sample.
There is substantial heterogeneity in worker experiences within platform-mediated gig work.
Observed variation in roles (primary vs. supplementary), earnings distribution (median below traditional but top-decile premiums), and access to benefits across the 24-country dataset from surveys, administrative records, and platform transaction data.
About 65% of gig workers engage in platform work as supplementary income alongside traditional employment or education.
Self-reported employment status and activity overlap from labor force surveys and administrative linkages in the 24-country dataset.
Artificial intelligence (AI) has a positive and statistically significant effect on growth at lower conditional quantiles (τ = 0.10–0.25) but is insignificant at higher quantiles.
MMQR estimation results reported in the paper showing significant positive AI coefficients at τ = 0.10–0.25 and insignificant coefficients at higher quantiles.
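MMQR is a method-of-moments estimator, but the quantile-specific logic behind results like significance at τ = 0.10–0.25 can be illustrated with the check (pinball) loss, which the τ-th quantile minimizes. A minimal pure-Python sketch on toy data (illustrative only; not the paper's estimator or data):

```python
# Check (pinball) loss: the constant predictor u minimizing its average over
# a sample is the sample's tau-quantile; quantile-regression methods
# generalize this to conditional quantiles.
def pinball_loss(u, sample, tau):
    """Average check loss of a constant predictor u at quantile tau."""
    return sum(tau * (y - u) if y >= u else (1.0 - tau) * (u - y)
               for y in sample) / len(sample)

def quantile_by_loss(sample, tau):
    """A minimizer can always be found among the sample points themselves."""
    return min(sample, key=lambda u: pinball_loss(u, sample, tau))

growth = [0.5, 1.1, 1.8, 2.4, 3.0, 3.7, 4.5, 5.2, 6.0]  # toy growth rates
print(quantile_by_loss(growth, 0.10))  # low conditional quantile
print(quantile_by_loss(growth, 0.25))
```

Estimating the loss at low τ weights shortfalls below the predictor heavily, which is why effects can be significant at the bottom of the conditional growth distribution yet vanish at higher quantiles.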
Institutional factors (education systems, active labor market policies, mobility, industrial policy, social protection) shape net employment outcomes from AI.
Theoretical and policy-focused synthesis; cross-country comparisons in literature highlight institutional mediation though no single new cross-country empirical estimate is provided.
Net employment effects depend on the balance of substitution and complementarity, sectoral exposure, and institutional responses.
Conceptual labor-economics framework (task-based, skill-biased change) and comparative review of cross-country/sectoral evidence emphasizing institutional mediation.
AI will substantially restructure labor markets.
Task-based theoretical approach and cross-sectoral synthesis of empirical studies showing task substitution and complementarity effects across occupations and sectors.
Scholarly production, institutional incentives, funding, and the Cold War geopolitical context shaped which economic theories became prominent.
Historical institutional case study drawing on archives, correspondence, publication records, and contemporaneous debates to link institutional and funding environments to intellectual trajectories.
Long-run integration (degree of long-run association) between core AI and AI-enhanced robotics differs systematically across national innovation systems.
Country-level decomposition of patent filing series and time-series econometric tests for long-run relationships / cointegration between core AI and AI-enhanced robotics patent series for each country/region (China, U.S., Europe, Japan, South Korea).
Core AI, traditional robotics, and AI-enhanced robotics follow distinct historical trajectories over 1980–2019 and do not move together uniformly.
Time-series analysis using annual patent filing counts (1980–2019) for each domain; tests for common long-run relationships / co-movement across the three patent series (as reported in the paper). Country-aggregated and domain-specific patent time series were analyzed; exact sample size (total patents) not specified in the summary.
Kondratieff, Schumpeter, and Mandel each highlight different drivers of capitalist long waves: Kondratieff emphasizes regular technological-driven renewal, Schumpeter emphasizes entrepreneurship and innovation-led creative destruction, and Mandel emphasizes class relations and production structures.
Comparative theoretical analysis and literature synthesis across the three schools; conceptual summary of canonical positions (no original dataset; qualitative interpretation).
XChronos reframes transhumanist technology evaluation in experiential terms, creating both market opportunities and measurement/regulatory challenges for AI economics.
Synthesis and concluding argument in the paper summarizing proposed implications; conceptual reasoning without empirical tests.
Across 182 reviewed studies, LLM-generated synthetic participants have modest and inconsistent fidelity to human participants.
Systematic review and synthesis of 182 empirical and methodological studies comparing LLM-generated participants to human samples; studies were coded and analyzed for fidelity outcomes.
Human factors (training, trust calibration, workflows) determine whether clinicians accept, override, or ignore GenAI suggestions.
Qualitative and quantitative human-AI interaction studies and pilot deployments discussed in the paper; specific sample sizes and effect sizes are not reported in the paper.
Safety and net benefit of GenAI CDS hinge on deployment details: user interface, real-time feedback, uncertainty quantification, calibration, and how recommendations are presented (strong vs. suggestive).
Human factors and implementation studies referenced; early A/B tests and human-AI interaction research suggest interface and presentation affect acceptance and error rates; no large-scale standardized implementation trial data cited.
Reimbursement models (fee-for-service vs. capitation) will influence whether cost savings from GenAI are realized or offset by increased service volume.
Economic incentive framework and prior health-economics literature cited; the paper does not provide direct empirical tests but references plausible incentive channels.
RL and adaptive methods are well suited to real-time adaptation but can be myopic, require large amounts of interaction data, and struggle to incorporate long-term preference structure and ethical constraints.
Surveyed properties of reinforcement learning and adaptive methods in HRI/RS literature; no new empirical evaluation in this paper.
Key tradeoffs in contemporary financing models include speed/flexibility versus regulatory coverage and long‑term cost, and data reliance versus privacy/fairness.
Multi‑criteria comparative evaluation and conceptual analysis across financing models; synthesis draws on regulatory context and observed product features rather than primary quantitative tradeoff estimation.
Performance of structure prediction models scales with data, model size, and compute; there are tradeoffs between accuracy and inference speed/simplicity.
Paper explicitly states scaling behavior and tradeoffs in 'Compute and training' and 'Representative models' sections; no precise scaling curves or thresholds are provided in the text.
The United States' decentralized education system produces tensions between local innovation and federal accountability, with active debates over data and privacy laws shaping responses to AI in assessment.
Case study of U.S. policy and secondary literature documenting federal-state-local governance dynamics and ongoing legal/policy debates; descriptive evidence from public documents.
China's centralized control enables rapid piloting of AI-supported assessment but raises concerns over surveillance and data governance.
Country case study using Chinese policy texts and secondary analyses describing centralized education governance and data-governance practices; illustrative rather than empirical.
India faces pressure to maintain high-stakes exams amid uneven digital access and is experimenting with blended formative tools.
Country-specific case study based on policy documents and secondary literature describing India's exam system and early technology initiatives; no primary survey/sample size.
Four national case studies (India, China, the United States, Canada) illustrate diverse national responses to AI in assessment shaped by governance structures, resource constraints, cultural attitudes, and political pressures.
Cross-national comparative analysis using publicly available policy texts, recent reforms, and secondary literature for each country; descriptive, illustrative cases rather than exhaustive or representative samples.
Important tradeoffs exist (privacy vs. utility; centralized vs. federated data architectures; automated moderation vs. freedom of expression; cost/complexity of secure hardware) that must be balanced in VR security design.
Comparative evaluation across the reviewed corpus (31 studies) identifying recurring ethical and technical tradeoffs; authors discuss these qualitatively.
Across the EU, Algeria, and Pakistan there is convergent recognition of dual‑use risks, increasing use of export controls, and interest in developing domestic AI capacity.
Cross‑jurisdictional synthesis of national/supranational legal texts, export‑control policies, and policy documents showing discussion of dual‑use issues and capacity building.
The community knowledge functions both as practical how-to guidance and as collective experimentation with platform rules and revenue mechanisms.
Observed dual nature in the 377-video corpus: instructional workflows alongside demonstrations/testing of platform-tailored monetization tactics and workarounds.
Typical practices emphasized by creators include rapid mass production of content, productizing prompt engineering, repurposing existing material via synthesis/localization, and packaging AI outputs as sellable creative services or assets.
Recurring practices surfaced through qualitative coding of workflows, tools, and pipelines described in the 377 videos.
Across the 377 videos, creators converge on a set of repeatable use cases and platform‑tailored monetization tactics.
Thematic coding of 377 videos produced a catalog of recurring use cases and tactics; the paper reports convergence across that sample.