Evidence (4793 claims)
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. Row totals can exceed the sum of the four listed directions, which suggests some claims carry an unclassified direction.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Productivity
Technical expansion without an accompanying theory of lived temporality risks increasing capabilities while degrading the qualitative depth of human experience (presence, attentional flow, felt meaning).
Argumentative claim supported by philosophical analysis and literature synthesis (neurophenomenology, attention economics); no empirical test reported.
Differential access to higher-quality (paid) versus free GenAI tools and differing ability to engage with the tool could widen inequality among students and institutions.
Authors' implication based on student-reported concerns about limitations of free ChatGPT versions and on heterogeneous gains across disciplines; this is a policy/implication claim not directly measured in the experiment.
High-quality, equitable climate information displays public-good characteristics (nonrival, nonexcludable at scale), so private incentives alone will underprovide geographically representative data and shared infrastructure.
Economic reasoning supported by observed concentration of compute and model development (mapping) and standard public-goods theory; no formal empirical market model estimated in the paper.
Heterogeneous trust levels across firms and schools may produce uneven productivity gains and widen performance gaps.
Logical implication and policy discussion in the paper; the cross-sectional study documents relationships between trust and outcomes but does not provide aggregate diffusion or cross-firm longitudinal evidence to confirm unequal sectoral diffusion.
Overreliance on unvetted AI can propagate biases; economic gains from AI therefore require governance, auditing, and accountability mechanisms.
Framed as a risk and policy recommendation in the discussion; not an empirical finding from the cross-sectional survey reported in the summary.
Full replacement of physicians would require breakthroughs in robust generalization, embodied capabilities, and legal/regulatory change—currently lacking.
Conceptual inference based on documented limitations (OOD generalization, lack of embodied/sensorimotor capability, unsettled legal/regulatory environment) summarized in the review.
Emerging agentic/AGI capabilities introduce new failure modes and governance challenges that standard ML oversight may not cover.
Emerging literature, theoretical analyses, and expert opinion summarized in the synthesis; authors note limited empirical long-term data and characterize this as an emergent risk.
Centralized provision of high-quality coding models by a few vendors could produce vendor lock-in and increase platform power in software development inputs.
Market-structure analysis and industry observations synthesized in the paper; the claim is forward-looking and not established by longitudinal market data within the review.
If many firms adopt AI generation without matching verification, aggregate fragility in software-dependent infrastructure could rise, increasing downtime costs and systemic economic risk.
Macro-level risk projection and system fragility argument in the paper; no macroeconomic modeling or empirical scenario analysis provided.
The reversal of the burden of proof described in the following claim creates moral-hazard-like behavior: incentives for speed reduce verification effort.
Theoretical argument built on the micro-coercion mechanism and economic reasoning; no empirical validation provided.
Under time pressure, developers adopt an implicit default of accepting plausible machine outputs unless they can disprove them (the 'micro-coercion of speed'), effectively reversing the burden of proof.
Behavioral mechanism posited from descriptive reasoning and thought experiments; no behavioral experiments, surveys, or observational data reported.
DAR dynamics (authority states, hysteresis, safe-exit times) introduce path-dependence and switching costs that should be treated as state variables in production and decision models of human–AI joint work.
Theoretical implications section arguing these elements add path-dependence and switching costs to economic/production models; analytic reasoning, not empirical measurement.
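A minimal way to see the state-variable claim (my formalization, not the paper's notation): let $s_t$ be the current authority state of the human–AI pair, $a_t$ the period's task allocation, $f(a_t, s_t)$ per-period output, and $c(s_t, s_{t+1})$ the cost of switching authority states, with hysteresis captured by asymmetric costs $c(s, s') \neq c(s', s)$. The joint-work problem then becomes a dynamic program:

$$ V(s_t) = \max_{a_t,\; s_{t+1} \in \mathcal{F}(s_t)} \Big[ f(a_t, s_t) - c(s_t, s_{t+1}) + \beta\, \mathbb{E}\, V(s_{t+1}) \Big], $$

where the feasible-transition set $\mathcal{F}(s_t)$ encodes safe-exit times. Path-dependence follows because today's optimal allocation depends on the authority history summarized in $s_t$.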
Concentration risks exist because high fixed costs for safe integration and model adaptation may favor larger incumbents or platform providers.
Conceptual economic reasoning and practitioner commentary synthesized in the review; no empirical market-structure analysis or sample-based evidence included here.
Rich contextual memories and continuous home interaction create valuable data streams that could enable firms to capture substantial value, raising concerns about data governance, consent, and monetization.
Authors' policy and economic implications discussion noting that MMCM-like memories generate valuable data; this is a conceptual/policy claim rather than empirically tested within the study.
Imported AI systems may impose foreign values and norms, risking erosion of indigenous knowledge and social cohesion.
Normative and conceptual argument supported by cited case studies and policy analyses; no original anthropological or sociological fieldwork in the paper.
Deployed AI systems can produce algorithmic bias that harms marginalized groups when models are trained on skewed or non‑representative data.
Synthesis of prior empirical findings and case studies on algorithmic bias and fairness in ML systems; paper does not present new empirical tests.
Human reviewers may over-trust machine-generated language and explanations (automation bias), reducing the likelihood of detecting fraudulent outputs.
Reference to automation-bias literature and conceptual examples; threat modeling and illustrative vignettes in the article.
Existing internal audit and compliance frameworks focus on access, transaction, and system controls, not on content-generation integrity.
Literature and standards review combined with threat-control mapping demonstrating gaps in content/provenance coverage.
Using calibrated, employee-level predictions enables marginal-cost analyses and prioritization (micro-targeting) to improve retention efficiency relative to uniform, across-the-board policies.
Methodological argument: calibrated individual probabilities plus counterfactual impact estimates enable ranking employees by expected gain from interventions and thus marginal-cost prioritization (no empirical cost–benefit calculations provided).
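A minimal sketch of the prioritization logic this claim describes (illustrative only; all names and figures are hypothetical, and the paper reports no cost–benefit calculations): rank employees by expected retention gain per intervention dollar, using the calibrated attrition probability and the estimated counterfactual treatment effect, then spend a fixed budget down the ranking.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    p_attrit: float           # calibrated probability of leaving (model output)
    uplift: float             # estimated reduction in p_attrit if treated
    value_if_retained: float  # replacement/retention value in dollars
    intervention_cost: float  # cost of the retention intervention

def prioritize(employees, budget):
    """Rank employees by expected gain per dollar; treat until the budget runs out."""
    def gain_per_dollar(e):
        # Treatment cannot reduce attrition probability below zero.
        expected_gain = min(e.uplift, e.p_attrit) * e.value_if_retained
        return expected_gain / e.intervention_cost
    ranked = sorted(employees, key=gain_per_dollar, reverse=True)
    treated, spent = [], 0.0
    for e in ranked:
        # Only treat when the expected dollar gain exceeds the cost.
        if spent + e.intervention_cost <= budget and gain_per_dollar(e) > 1.0:
            treated.append(e.name)
            spent += e.intervention_cost
    return treated, spent

# Hypothetical inputs: one well-targeted employee, one poorly targeted.
staff = [
    Employee("A", p_attrit=0.60, uplift=0.25, value_if_retained=80_000, intervention_cost=5_000),
    Employee("B", p_attrit=0.10, uplift=0.02, value_if_retained=80_000, intervention_cost=5_000),
]
print(prioritize(staff, budget=5_000))  # -> (['A'], 5000.0): targeting beats uniform spend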
Recommended research priorities include hierarchical/temporal-decomposition methods, continual learning, robust adaptation to non-stationarity, and causal/structured reasoning to handle multi-factor interactions.
Paper discussion linking observed failure modes to methodological gaps and proposing research directions to address limitations; these are recommendations rather than experimentally validated claims.
Regulators and payers will require clinical validation, safety guarantees, and clear liability frameworks for human–AI shared decision-making before widescale deployment.
Policy implication stated in the paper's discussion section based on general regulatory considerations; not an empirical result from the study.
Empirical economics research should use firm-level and pipeline microdata and quasi-experimental designs to estimate causal effects of AI adoption on outcomes like time-to-hit, preclinical attrition, IND filings, and NME approvals per R&D dollar.
Research recommendation offered in the paper based on identified gaps; not an evidence claim but an explicit methodological suggestion.
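One concrete instance of the recommended design (a sketch under assumed data; the file, the column names `firm`, `year`, `adopted_ai`, and the outcome `time_to_hit` are all hypothetical) is a two-way fixed-effects difference-in-differences regression on a firm-year panel:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed firm-year panel: one row per firm per year, with an AI-adoption
# indicator and a pipeline outcome. All names here are hypothetical.
df = pd.read_csv("firm_pipeline_panel.csv")  # columns: firm, year, adopted_ai, time_to_hit

# Firm and year dummies absorb time-invariant firm heterogeneity and common
# shocks; the coefficient on adopted_ai is the difference-in-differences
# estimate under the parallel-trends assumption.
model = smf.ols("time_to_hit ~ adopted_ai + C(firm) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(result.params["adopted_ai"], result.bse["adopted_ai"])
```

With staggered adoption timing, heterogeneity-robust event-study estimators would be preferable; the sketch shows only the baseline two-way fixed-effects design.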
Policy does not predict individuals' intent to increase usage but functions as a marker of maturity—formalizing successful diffusion by Enthusiasts while acting as a gateway the Cautious have yet to reach.
Analysis of a policy variable within the survey dataset (N=147) showing no predictive relationship with individual intent to increase AI use, but an association between presence of policy and indicators of organizational adoption/maturity and differential reach into archetype groups.
Prospective studies are needed to evaluate AI's real-world clinical impact in acute gastrointestinal bleeding (GIB).
Authors' recommendation in the discussion and conclusion based on the predominance of retrospective evidence and few prospective/RCTs.
The study recommends iterative prompt refinement, integration with adaptive learning models, and further exploration of autonomous self-prompting mechanisms.
Concluding recommendations derived from the study's results and interpretation; presented as future directions rather than empirically tested interventions within this study.
Recommended future research includes scalable interoperability solutions, longitudinal lifecycle value validation, human‑centred adoption strategies, and sustainability assessment methods.
Authors' explicit recommendations at the end of the review based on identified gaps in the literature.
Future research priorities include obtaining causal estimates (e.g., field experiments) of productivity gains from trust-mediated AI adoption and conducting cost–benefit analyses of trust-building interventions.
Study’s stated research agenda/recommendations; not an empirical claim but a recommended direction for follow-up research.
Key research priorities include improving measurement of AI usage across countries, causal identification of long-run effects, and sectoral reskilling strategy evaluation.
Identified gaps and methodological limitations in the reviewed empirical literature (measurement heterogeneity, limited long-run panels, sectoral variation) motivating suggested future research agenda.
To measure and monitor these effects, researchers should track firm-level adoption of AI features, fulfillment automation intensity, platform-mediated market entry, and task-level labor shifts.
Author recommendations based on gaps identified in the case-based and multi-modal empirical work and the sensitivity of results to adoption measures; not an empirical finding but a methodological claim.
The threshold for taxing AI may be crossed once AI becomes sufficiently capable of substituting for humans across cognitive tasks.
Model-based comparative-static/threshold analysis showing that higher AI substitutability for cognitive tasks increases the likelihood that cognitive workers will consider switching to manual jobs, thereby meeting the model's tax-initiation condition.
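A stylized version of the threshold condition (my reconstruction of the comparative-static logic, not the paper's notation): let $\sigma$ index AI's substitutability for cognitive tasks, $w_c(\sigma)$ the cognitive wage (decreasing in $\sigma$), and $w_m$ the manual wage. Cognitive workers consider switching, and the tax-initiation condition is met, once

$$ w_c(\sigma) \le w_m \quad \Longleftrightarrow \quad \sigma \ge \sigma^{*}, \qquad \text{where } w_c(\sigma^{*}) = w_m. $$

Anything that steepens the decline of $w_c$ in $\sigma$ lowers $\sigma^{*}$ and brings the taxing threshold forward.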
Economic and organizational benefits (e.g., cost-effective retention, preserved human capital for environmental innovation) are plausible outcomes of applying the approach, but require further causal and cost analyses.
Paper discusses implications and hypothesizes ROI from reduced turnover (less recruiting/onboarding/productivity loss) and preservation of green capabilities; no empirical cost or productivity data provided in the presented summary.
Firms investing in human–AI co‑creation infrastructure may gain a resilience premium; policymakers and standards bodies should consider governance frameworks for adaptive algorithmic systems balancing responsiveness with oversight.
Policy and investment implication inferred from empirical results on resilience and detection performance; direct evidence of market valuation or policy outcomes is not reported.
Greater reliance on algorithmic co‑creation shifts labor demand toward roles skilled in model oversight, interpretive judgment, and human‑machine interaction rather than purely manual segmentation tasks.
Inference from the operationalization of human–AI co‑creation via the Canvas and observed changes in practitioner workflows during 6‑month ethnography (n = 23); workforce composition effects are not empirically measured at scale in the study.
A ~90% reduction in strategic planning cycle time indicates lower managerial coordination costs and faster reallocation of marketing and R&D budgets.
Inference from measured reduction in planning cycle length (~90%) observed in the study (see ethnography/system logs); direct measures of coordination costs and budget reallocation outcomes are not reported in the summary.
Algorithmic Canvas–enabled autopoietic STP increases firms' ability to adapt endogenously to shocks, implying higher realized productivity in volatile markets and lower deadweight losses from mis‑targeting.
Inference drawn from empirical findings on resilience and detection performance (44% greater resilience, improved signal detection) and theoretical reasoning about dynamic capabilities; productivity and deadweight loss are not directly measured in the reported empirical results.
Economic evaluations of AI adoption should include psychological and human-capital externalities (effects on self-efficacy, skill depreciation, job satisfaction) to fully account for welfare and productivity dynamics.
Argument grounded in experimental and survey findings showing psychological impacts of AI-use mode; general recommendation for research and evaluation rather than an empirical finding.
Realizing net societal gains from AI requires human-centered design, regulatory and control measures, and integration of sustainability indicators into technological development.
Normative conclusion drawn from the narrative review of interdisciplinary evidence and policy recommendations; not an empirically validated claim within this paper.
If banks operationalize NLP for personalization and acquisition at scale, this could increase differentiation, raise switching costs, and potentially affect market concentration—warranting antitrust monitoring.
Theoretical implication extrapolated from identified capability gaps and economic reasoning about differentiation, switching costs, and scaling advantages; not empirically tested in the reviewed papers.
Limited applied research on NLP for acquisition and personalization implies unrealized value in banking: NLP could enable more efficient, targeted customer acquisition and cross‑sell, potentially lowering customer‑acquisition cost (CAC) and increasing lifetime value (LTV).
Inference drawn from observed topical gaps (low article counts on acquisition/personalization) and standard marketing economics linking targeting/personalization to CAC and LTV; no direct causal evidence provided in the reviewed literature.
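The standard marketing-economics link invoked here can be made explicit (one common textbook formulation, not taken from the reviewed papers): with per-period margin $m$, retention rate $r$, and discount rate $i$,

$$ \mathrm{LTV} = \frac{m\,r}{1 + i - r}, \qquad \frac{\partial\, \mathrm{LTV}}{\partial r} = \frac{m\,(1+i)}{(1 + i - r)^2} > 0, $$

so NLP-driven personalization that raises $r$ increases LTV nonlinearly, while better targeting lowers CAC directly; both improve the LTV/CAC ratio that governs acquisition profitability.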
Multilateral coordination is needed to set baseline principles (data flows, privacy, AI safety, competition rules) to reduce regulatory fragmentation.
Scenario-based reasoning and policy prescription grounded in theoretical analysis of fragmentation costs; normative recommendation rather than empirical proof.
Research and funding priorities should reweight toward symbolic/structured knowledge, verification, curricula design, and orchestration algorithms rather than exclusive emphasis on model scale.
Prescriptive recommendation based on the conceptual advantages claimed for DSS; not supported by empirical policy or funding analysis within the paper.
Smaller, verifiable DSS agents are easier to audit and align per domain, potentially reducing systemic risks associated with large opaque generalist models.
Argumentative claim about auditability and verifiability of compact, domain-specific systems versus large generalists; no empirical auditability studies are provided.
DSS reduces environmental externalities (e.g., emissions, water use) relative to continued monolithic scaling and may reduce regulatory pressure tied to those externalities.
Theoretical claim tying reduced inference energy and decentralized deployment to lower environmental impacts; the paper suggests measuring emissions and water use but supplies no empirical measurements.
Specialization enables many niche DSS providers rather than a small number of dominant monolithic providers, thereby lowering entry barriers for vertical experts.
Market-structure argument based on modularization and domain-focused offerings; no empirical market analysis or simulation is provided.
Shifting to DSS changes the cost structure of AI: it lowers recurring OPEX per user by reducing inference energy and enabling local/device processing instead of centralized, inference-heavy cloud services.
Economic reasoning and proposed modeling approaches (capex/opex comparisons) described conceptually; no empirical economic model outputs or market data are included.
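A back-of-the-envelope version of the capex/opex comparison the paper describes conceptually (all figures below are placeholder assumptions, not measurements):

```python
def annual_opex_per_user(energy_wh_per_query, queries_per_day,
                         price_per_kwh, overhead_multiplier):
    """Rough energy-driven OPEX per user per year. overhead_multiplier folds in
    cooling, networking, and datacenter overhead (a PUE-like factor)."""
    kwh_per_year = energy_wh_per_query * queries_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh * overhead_multiplier

# Placeholder assumptions, NOT measured values:
monolithic = annual_opex_per_user(energy_wh_per_query=3.0,   # large cloud model
                                  queries_per_day=50,
                                  price_per_kwh=0.12,
                                  overhead_multiplier=1.5)
dss_edge   = annual_opex_per_user(energy_wh_per_query=0.2,   # compact on-device model
                                  queries_per_day=50,
                                  price_per_kwh=0.12,
                                  overhead_multiplier=1.0)   # no datacenter overhead

print(f"monolithic: ${monolithic:.2f}/user/yr, DSS edge: ${dss_edge:.2f}/user/yr")
```

Under these placeholder numbers the recurring cost falls by roughly a factor of 20; the claim stands or falls on the actual per-query energy figures, which the paper proposes to measure.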
DSS societies can achieve much lower inference energy per task and enable easier on-device/edge deployment compared to monolithic LLM deployments.
Argument that smaller, domain-focused models require fewer compute resources and thus lower energy and are better suited to edge hardware; empirical measurements to support this claim are proposed but not supplied.
Architecturally, replacing single giant generalists with 'societies' of small, specialized DSS models routed by orchestration agents yields operational benefits (routing to experts, modular upgrades, specialization).
Conceptual architectural proposal describing specialized back-ends and orchestration/routing agents; the paper outlines recommended experiments but reports no empirical orchestration benchmarks.
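A minimal sketch of the orchestration pattern this claim describes (the class, names, and routing heuristic are illustrative; the paper proposes experiments but reports no benchmarks):

```python
from typing import Callable, Dict

# Each domain-specific model is represented by a callable; in practice these
# would be compact DSS back-ends (e.g., a tax model, a radiology model).
DomainModel = Callable[[str], str]

class Orchestrator:
    """Routes a query to the registered domain expert; a modular upgrade
    amounts to swapping a single entry in the registry."""
    def __init__(self) -> None:
        self.registry: Dict[str, DomainModel] = {}

    def register(self, domain: str, model: DomainModel) -> None:
        self.registry[domain] = model

    def route(self, query: str) -> str:
        # Illustrative keyword routing; a deployed system would use a learned
        # classifier or embedding similarity to pick the expert.
        for domain, model in self.registry.items():
            if domain in query.lower():
                return model(query)
        return "no domain expert registered for this query"

orch = Orchestrator()
orch.register("tax", lambda q: f"[tax-DSS] answer to: {q}")
orch.register("radiology", lambda q: f"[radiology-DSS] answer to: {q}")
print(orch.route("What is the tax treatment of R&D credits?"))
```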
A more sustainable and effective trajectory is to build domain-specific superintelligences (DSS) grounded in explicit symbolic abstractions (knowledge graphs, ontologies, formal logic) and trained via synthetic curricula so compact models can learn robust, domain-level reasoning.
Prescriptive proposal based on theoretical arguments about the benefits of symbolic abstractions, compact model training, and synthetic curricula; no experimental validation or empirical comparison is provided in the paper.
Improved alignment can reduce harms from misinterpretation (incorrect decisions, misinformation), lowering downstream liability and reputational risk for vendors and customers.
Paper's safety and externalities discussion argues this as a likely consequence; the claim is theoretical and not supported by empirical incident data in the paper.
Providers may charge a premium for alignment-enabled API tiers or incorporate C.A.P. into enterprise plans because of additional compute per interaction, affecting pricing and unit economics.
Paper's pricing and costs discussion predicts potential monetization strategies and pricing experiments (A/B pricing, willingness-to-pay studies) but does not report market data.