Evidence (4049 claims shown; Governance filter active)

Claim counts by category:

- Adoption: 5126
- Productivity: 4409
- Governance: 4049
- Human-AI Collaboration: 2954
- Labor Markets: 2432
- Org Design: 2273
- Innovation: 2215
- Skills & Training: 1902
- Inequality: 1286
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Governance
- **Claim:** Destinations that invest in trustworthy AI ecosystems and credible sustainability narratives can capture greater market share, increasing competitive pressure among destinations and platforms.
  *Evidence basis:* Conceptual market-structure argument and literature synthesis; illustrated with Kebumen as an emergent destination example; no empirical testing offered.
- **Claim:** AI personalization can increase demand by improving match quality between tourists and offerings, raising consumer surplus and potentially willingness-to-pay.
  *Evidence basis:* Theoretical economic reasoning in the AI economics section of the paper; no empirical estimates or data provided.
- **Claim:** These effects operate largely through consumer trust in technology (digital trust) as a mediator, with destination image serving as an additional mediator between trust and behavioral intention.
  *Evidence basis:* Theoretical mediation model proposed in the paper based on sustainable marketing theory and prior literature; illustrated via case discussion; no empirical testing reported.
- **Claim:** Digital experience quality, AI-driven personalization, sustainability communication, and social proof jointly shape destination image and tourists’ visit intention.
  *Evidence basis:* Conceptual integrative framework and literature synthesis presented in the paper; illustrated using Kebumen UNESCO Global Geopark as a case example; no primary empirical data collected.
- **Claim:** Public funding for open models, shared compute infrastructures, and curated public datasets could counteract concentration and promote broad innovation.
  *Evidence basis:* Paper advocates this in 'Policy and public‑goods considerations' as a prescriptive policy option; it is a proposed mitigation rather than an empirically tested intervention in the text.
- **Claim:** Demand for security engineers, privacy specialists, human moderators, and behavioral scientists will rise, increasing wages in these specialties and altering labor allocations in AI/VR firms.
  *Evidence basis:* Authors' labor‑market inference drawn from increased needs implied by TVR‑Sec implementation and literature on moderation/security demand; no labor‑market data or forecasts provided.
- **Claim:** Platforms that credibly offer strong privacy and socio‑behavioral protections may capture user trust and monetization opportunities (e.g., enterprise, healthcare, education), making safety features a potential competitive differentiator.
  *Evidence basis:* Authors' market‑structure reasoning based on synthesized literature and economic theory; no empirical adoption or revenue data provided to validate this claim.
- **Claim:** Harmonized international norms and transparency measures would reduce transaction costs, limit market fragmentation, and lower the likelihood of destabilizing arms‑race dynamics, thereby improving the environment for cross‑border investment and trade in AI.
  *Evidence basis:* Authors' normative/economic argumentation based on comparative findings; proposed as a policy implication rather than an empirically validated result.
- **Claim:** Aligning domestic rules with international risk‑mitigation norms, increasing transparency in defence procurement/AI operations, and strengthening multilateral confidence measures would reduce escalation and abuse.
  *Evidence basis:* Authors' policy argumentation and normative reasoning based on comparative findings (not empirically tested in the paper).
- **Claim:** Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data—enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends).
  *Evidence basis:* Normative and conceptual claim from the workshop's economics discussion and design provocations; not empirically evaluated within the workshop summary.
- **Claim:** We need to move beyond explicit, one-time consent decisions toward broader mechanisms by which users can influence data use (e.g., delegation, preferences over inference/usage).
  *Evidence basis:* Workshop recommendation emerging from co-design exercises, futures scenarios, and position papers; presented as a normative/design agenda rather than an empirically tested intervention.
- **Claim:** Policy instruments such as open-data mandates, compute-sharing incentives, and conditionality in R&D funding can help ensure equitable validation and local engagement in climate-AI development.
  *Evidence basis:* Policy recommendations grounded in normative analysis and analogies to existing public-good interventions; no empirical evaluation of these specific instruments provided in the paper.
- **Claim:** Economists should prioritize research to quantify returns to investments in CDPI versus private compute, estimate economic costs of maladaptation from biased AI outputs, and design incentive-compatible mechanisms for data sharing and co-production.
  *Evidence basis:* Research agenda and recommendations presented by the authors; this is a suggested empirical/theoretical program rather than a tested result.
- **Claim:** Establishing Climate Digital Public Infrastructure (CDPI)—shared, interoperable data and compute resources, standards, and governance—can democratize access and reduce inequities in climate-AI.
  *Evidence basis:* Policy proposal and normative argument drawing analogies to public goods (observational networks, satellites); no empirical evaluation of CDPI implementations presented.
- **Claim:** Shifting from a model-centric to a data-centric approach (improving data quality, representativeness, and governance) will mitigate the harms caused by current infrastructural asymmetries.
  *Evidence basis:* Normative recommendation grounded in conceptual arguments and illustrative examples; not supported by empirical interventions or randomized/controlled comparisons in the paper.
- **Claim:** Policy and governance should preserve worker agency (participatory design, transparency, clear accountability) and support training and institutional mechanisms (collective bargaining, workplace representation) to negotiate value-sharing from AI productivity gains.
  *Evidence basis:* Normative policy recommendation by authors derived from qualitative findings (workshops with 15 UX designers) that highlighted agency and distributional concerns.
- **Claim:** THETA outputs can be used to create domain-tailored textual covariates (e.g., narrative indices, topic intensity) for regressions or forecasting, provided researchers validate outputs with human coding and sensitivity checks.
  *Evidence basis:* Practical recommendation and implication for economists in the discussion; not an empirical claim directly tested in the reported experiments.
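As a hypothetical sketch of that workflow, topic intensities can enter a standard OLS design as textual covariates. Everything below is invented for illustration (synthetic data, made-up topic names); THETA's actual output format is not assumed, and real use would require the human-coding validation the claim calls for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical THETA outputs: per-document topic intensities, e.g. the
# share of an earnings call devoted to a "supply chain" or "demand" frame.
topic_supply = rng.uniform(0, 1, n)
topic_demand = rng.uniform(0, 1, n)

# Synthetic outcome loading on the narrative indices plus noise.
growth = 0.5 * topic_supply - 0.3 * topic_demand + rng.normal(0, 0.1, n)

# The textual covariates enter an ordinary OLS design alongside an intercept.
X = np.column_stack([np.ones(n), topic_supply, topic_demand])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)

print(beta)  # intercept and the two narrative-index coefficients
```

A sensitivity check of the kind the claim recommends would re-run this regression under alternative topic definitions and compare the recovered coefficients.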
- **Claim:** THETA can surface domain-specific frames, stakeholder positions, and emergent arguments from large comment corpora or filings, assisting policy and regulatory analysis.
  *Evidence basis:* Stated implication and example applications (regulatory comment corpora, filings); no direct case-study results or downstream policy-analytic validations included in the summary.
- **Claim:** THETA's DAFT plus the agent workflow reduces the marginal cost of coding and classification, making large-N qualitative analysis more feasible.
  *Evidence basis:* Argued implication based on use of parameter-efficient LoRA and human-in-the-loop agent design; no cost analyses, time studies, or economic comparisons provided in the summary.
- **Claim:** Operationally, platform designers should monitor dependency-graph structure as a systemic risk indicator for price volatility and provide integrator abstractions to encapsulate cross-cutting complexity.
  *Evidence basis:* Practical implication drawn from simulation findings (not a direct empirical test on production systems): hybrid integrator results and topology-dominance results motivate these recommendations; no real-world deployment data presented.
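One way to make "monitor dependency-graph structure" concrete is a Herfindahl-style concentration index over dependency in-degrees, which flags when many components lean on a few hubs. This is a minimal pure-Python sketch with an invented toy graph; it is not the indicator or simulation setup used in the paper.

```python
from collections import Counter

# Hypothetical dependency edges among platform components,
# as (dependent, dependency) pairs. "core" is a hub here.
edges = [
    ("a", "core"), ("b", "core"), ("c", "core"), ("d", "core"),
    ("b", "auth"), ("c", "auth"),
    ("d", "logs"),
]

def dependency_concentration(edges):
    """Herfindahl-style index over shares of incoming dependency edges.
    Ranges from 1/k (edges spread evenly over k targets) to 1.0
    (every component depends on a single hub)."""
    indeg = Counter(dep for _, dep in edges)
    total = sum(indeg.values())
    return sum((c / total) ** 2 for c in indeg.values())

hhi = dependency_concentration(edges)
print(round(hhi, 3))  # prints 0.429
```

A rising value of such an index over successive platform snapshots would be the kind of structural early-warning signal the recommendation envisions.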
- **Claim:** Clinic-aware designs and reliable validation can enable clearer evidence of value, facilitating payer reimbursement, value-based care contracts, and new pricing models for AI-enabled medical devices and services.
  *Evidence basis:* Policy and reimbursement implications discussed by clinicians and industry participants during the workshop and summarized in the workshop report (NSF workshop, Sept 26–27, 2024).
- **Claim:** Scalable validation ecosystems and continuous objective measures reduce information asymmetries between developers, clinicians, and payers, lowering commercialization and regulatory risk, which raises private returns and speeds adoption.
  *Evidence basis:* Economic implications and causal argument set out in the workshop summary based on expert judgement and theory discussed at the NSF workshop (Sept 26–27, 2024).
- **Claim:** Firms that can credibly supply explainability and governance may capture a premium—explainability can be a competitive differentiator and a signal of quality and lower regulatory risk.
  *Evidence basis:* Conceptual synthesis and market-structure arguments from the reviewed literature; reviewed studies provide theoretical and some qualitative support but not systematic market-price estimates.
- **Claim:** Policy should incentivize transparency, auditability, standards for human–AI interfaces, workforce development, certification of teaming practices, and liability frameworks to ensure accountability and equitable outcomes.
  *Evidence basis:* Normative recommendation based on ethical and governance considerations synthesized in the paper; not supported by policy evaluation evidence within the paper.
- **Claim:** Orchestrating attention and interrogation through interface and workflow design helps manage what humans and AI focus on and how they challenge/verify each other, thereby reducing errors and misuse.
  *Evidence basis:* Prescriptive claim grounded in human factors and HCI literature synthesized by the authors; the paper suggests these mechanisms but does not report empirical trials demonstrating effects.
- **Claim:** Design principles (define goals/constraints, partition roles, orchestrate attention/interrogation, build knowledge infrastructures, continuous training/evaluation) are necessary design levers to build high-performing, transparent, trustworthy, and equitable Human–AI teams.
  *Evidence basis:* Prescriptive synthesis from reviewed literatures and conceptual modeling; these principles are proposed heuristics rather than empirically validated interventions in the paper.
- **Claim:** Embedding AI produces operational gains: automation of routine tasks, fewer errors, faster decision cycles, and continuous model learning/refinement.
  *Evidence basis:* Operational claim articulated conceptually with suggested evaluation metrics (forecast accuracy, latency, false positive/negative rates); the paper does not present empirical measurement, sample sizes, or deployment results.
- **Claim:** Risk management can accelerate AI adoption by lowering uncertainty for managers and investors, thereby affecting diffusion and productivity gains from AI.
  *Evidence basis:* Conceptual implication derived from the review's synthesis and discussion (policy/implication section); not supported by primary empirical testing within the reviewed literature.
- **Claim:** Firms that adopt structured risk management for AI projects can reduce model failure, operational losses, and reputational costs—improving risk-adjusted returns on AI investment.
  *Evidence basis:* Theoretical and practical extrapolation from general RM frameworks and thematic findings in the literature; no AI-specific primary empirical studies included in the review.
- **Claim:** Structured risk management can produce potential cost savings via reduced loss events and more efficient capital allocation.
  *Evidence basis:* Reported as a benefit across some reviewed studies and practitioner reports; the review notes lack of primary empirical quantification of effect sizes.
- **Claim:** Firms that design processes to preserve human diversity and elicit diverse AI outputs may capture greater productivity gains, increasing returns to organizational capability rather than to raw model access.
  *Evidence basis:* Theoretical implication and prescriptive recommendation based on observed homogenization; no direct causal firm-level evidence presented, inference based on economic reasoning.
- **Claim:** Procurement contracts for AI systems can require staged validation (pilot, local fine-tuning) and performance-linked payments to align incentives and reduce adoption risk.
  *Evidence basis:* Policy recommendation drawn from procurement and incentive-design literature synthesized in the review; not an empirical claim about observed outcomes but a proposed intervention to mitigate identified risks.
- **Claim:** Clear regulatory standards for synthetic data quality, provenance, and acceptable validation pipelines will lower transaction costs, reduce liability risk, and stimulate private-sector offerings (synthetic-data services, marketplaces).
  *Evidence basis:* Policy and governance analyses in the review arguing that regulatory clarity reduces uncertainty and promotes market activity; this is a policy inference supported by comparative regulatory studies rather than direct causal empirical proof specific to African markets.
- **Claim:** AI contributes to flatter, more networked and modular organizational forms, with increased cross-functional coordination enabled by shared data platforms and real-time analytics.
  *Evidence basis:* Conceptual reasoning supported by cross-sector illustrative examples; no standardized cross-firm comparative empirical study reported in the book.
- **Claim:** Model and platform providers may capture significant rents through APIs and integrated developer tooling.
  *Evidence basis:* Market-structure analysis and observations of current platform monetization strategies; speculative projection based on platform economics.
- **Claim:** Skill premiums may shift toward workers who can effectively collaborate with AI (prompting, verification, security auditing).
  *Evidence basis:* Theoretical and early observational studies suggesting complementary skills add value; limited empirical wage/earnings evidence to date.
- **Claim:** Computer science curricula should emphasize computational thinking, debugging skills, and verification practices rather than rote coding alone.
  *Evidence basis:* Educational implications drawn from studies of learning with LLMs, risks of shallow learning, and expert recommendations; primarily normative and prescriptive rather than experimental proof.
- **Claim:** White-box audits (inspecting model internals, logs, provenance) can detect evasion and recalibrate norms when triggered by anomalies or high-value activity.
  *Evidence basis:* Proposed legal and technical audit procedures discussed in the paper; authors do not present audit results or case studies.
- **Claim:** Norm-based tax rates derived from observable usage characteristics can reduce gaming and simplify compliance.
  *Evidence basis:* Normative argument and proposal in the paper recommending standardized tax schedules; no empirical evaluation or calibration.
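To illustrate the mechanism behind a norm-based schedule, the sketch below keys the rate to a single coarse, observable usage characteristic (monthly compute-hours) rather than self-reported figures. The tiers, thresholds, and rates are entirely invented for illustration; the paper proposes standardized schedules but does not specify these numbers.

```python
# Hypothetical norm-based schedule: the rate depends only on an
# observable usage tier, narrowing the scope for gaming via
# self-reported allocations. All values below are illustrative.
SCHEDULE = [
    (1_000, 0.00),          # de minimis usage: exempt
    (50_000, 0.02),         # standard tier
    (float("inf"), 0.05),   # heavy-usage tier
]

def norm_based_rate(compute_hours: float) -> float:
    """Return the flat rate for the first tier whose threshold
    covers the observed usage."""
    for threshold, rate in SCHEDULE:
        if compute_hours <= threshold:
            return rate
    raise AssertionError("unreachable: schedule ends with an inf threshold")

print(norm_based_rate(500), norm_based_rate(10_000), norm_based_rate(2_000_000))
# prints: 0.0 0.02 0.05
```

Because the rate is a pure function of an externally observable quantity, compliance reduces to verifying one metered number, which is the simplification the claim points to.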
- **Claim:** Dynamic oversight regimes (ongoing audits, continuous certification) are likely more effective than one-time approvals for managing risks from agentic AI.
  *Evidence basis:* Policy and governance argument based on the dynamic nature of agentic systems; presented as a recommendation rather than empirically validated.
- **Claim:** Firms will place greater value on alignment-as-a-service, monitoring platforms, and certification/assurance products as agentic systems proliferate.
  *Evidence basis:* Market-structure and demand reasoning from the paper; proposed as an implication rather than empirically demonstrated.
- **Claim:** DAR-capable systems that credibly implement transparent registers and controlled reversibility may face lower adoption frictions in high-stakes sectors, affecting market dynamics and insurer/purchaser willingness to pay.
  *Evidence basis:* Economics-oriented implication and conjecture in the paper about adoption dynamics and market effects; not empirically tested in the manuscript.
- **Claim:** Demand will increase for complementary goods: orchestration platforms, testing/verification tools, secure code-generation services, and team-level integrations.
  *Evidence basis:* Projected market implication based on practitioner-identified frictions (quality, security, integration) in the Netlight study; speculative market prediction without market data.
- **Claim:** The need to orchestrate AI ensembles increases demand for skills in system design, AI-tooling, and coordination rather than only coding.
  *Evidence basis:* Authors' inference based on observed practitioner emphasis on supervision and integration tasks in the Netlight qualitative study; no labor market data provided.
- **Claim:** First-mover and scale advantages are likely for firms that successfully integrate AI with robust oversight, potentially creating durable cost and service-quality advantages.
  *Evidence basis:* Theoretical and strategic analyses aggregated in the review; this is inferential and not supported by longitudinal competitive empirical studies within this paper.
- **Claim:** Platforms combining high-volume generation with effective filtering/curation can create strong network effects and concentration in markets for AI-assisted ideation.
  *Evidence basis:* Market-structure reasoning and illustrative platform examples from the literature; no empirical market-wide causal studies reported in the review.
- **Claim:** Firms that embed AI into collaborative workflows and invest in human curation may capture disproportionate returns (first-mover and scale advantages).
  *Evidence basis:* Theoretical/strategic argument supported by some applied case evidence and platform-market reasoning cited in the synthesis; the review notes absence of systematic causal firm-level evidence.
- **Claim:** Generative AI will create complementarity: increasing returns to skills in evaluation, curation, synthesis, and domain expertise that integrate AI outputs.
  *Evidence basis:* Theoretical labor-economics reasoning supported by case studies and task-level studies showing demand for evaluation/curation skills in AI-assisted workflows; direct causal evidence on wage effects is limited in the reviewed literature.
- **Claim:** Lowered cost and time of ideation and early-stage R&D due to generative AI may accelerate innovation cycles and reduce firms' search costs.
  *Evidence basis:* Inference from studies reporting reduced time-to-prototype and increased ideation; this is an economic interpretation rather than directly measured long-run firm-level innovation rates in the reviewed studies.
- **Claim:** Firms must redesign KPIs to capture trust-related externalities (accuracy, escalation rates, repeat contacts) rather than speed and throughput alone, in order to avoid perverse incentives.
  *Evidence basis:* Recommendation based on observed trade-offs in deployments where emphasis on speed/throughput can harm quality/trust; not supported by randomized tests in the paper.