Evidence (4333 claims)

Claim counts by topic:

| Topic | Claims |
|---|---|
| Adoption | 5539 |
| Productivity | 4793 |
| Governance | 4333 |
| Human-AI Collaboration | 3326 |
| Labor Markets | 2657 |
| Innovation | 2510 |
| Org Design | 2469 |
| Skills & Training | 2017 |
| Inequality | 1378 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Governance
- **Claim:** Human cognitive learning processes (calibration, error-correction) may misalign with agentic AIs because humans and AIs learn from different signals and on different horizons.
  - *Evidence:* Conceptual argument supported by cross-disciplinary literature synthesis; empirical tests are proposed but not conducted in the paper.
- **Claim:** Relational interaction mechanisms (trust, norms, mutual adjustment) can break down when AI objectives diverge or are opaque, reducing effective teaming.
  - *Evidence:* Argument drawing on the human factors and human–AI teaming (HAT) literature; no new experimental data presented.
- **Claim:** Agreement on bounded outputs (specifications, short-term goals) is insufficient for maintaining alignment with agentic AI.
  - *Evidence:* Theoretical critique of specification-based alignment approaches; literature on the limits of bounded specifications applied to open-ended systems.
- **Claim:** Agentic AI undermines key assumptions that shared awareness will reliably stabilize coordinated action over time.
  - *Evidence:* Theoretical argument showing mismatches in representation, timescales, and learning dynamics between humans and agentic AIs; drawn from literature synthesis rather than empirical tests.
- **Claim:** Under agentic conditions, alignment cannot be treated as a one-time agreement over bounded outputs; it must be continuously sustained as plans and priorities evolve.
  - *Evidence:* Conceptual argument and modeling in the paper; literature synthesis highlighting the limits of specification-based alignment approaches; no empirical validation presented.
- **Claim:** Agentic AI creates a new kind of structural uncertainty for human–AI teaming.
  - *Evidence:* Theoretical/conceptual synthesis across the literature on HAT, Team Situation Awareness (Team SA), human factors, multi-agent systems, and AI alignment; no new empirical data.
- **Claim:** Regulators can operationalize 'human oversight' through auditable handover architectures like DAR, but this will increase compliance and record-keeping costs for firms and public bodies.
  - *Evidence:* Policy implication argued in the paper: coupling Reversal Register and hysteresis parameters to regulatory enforcement; no empirical cost estimates provided.
- **Claim:** Current AI tooling often mismatches existing team workflows and CI/CD pipelines, reducing seamless adoption.
  - *Evidence:* Qualitative observations and practitioner reports from the Netlight study describing tooling and workflow frictions; specific integrations (or their absence) are discussed but not quantitatively evaluated.
- **Claim:** Generated code can introduce security vulnerabilities and licensing/IP ambiguity, raising quality, security, and IP concerns.
  - *Evidence:* Practitioner concerns and examples documented in interviews and observations at Netlight; the paper cites security and IP uncertainty as recurring themes; no systematic security scans or legal analyses reported.
- **Claim:** Compliance with GDPR/CCPA and auditing for bias and harms impose non-trivial technical and legal costs; implementing federated learning and differential privacy (DP) increases engineering complexity and compute cost (see the sketch below).
  - *Evidence:* Paper's policy and cost discussion; cites increased engineering complexity and compute demands for privacy-preserving deployments but does not present quantified cost estimates.
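A minimal sketch of why DP adds engineering overhead, using the standard Gaussian mechanism; the function, figures, and parameter choices here are our illustration, not the paper's. Every private statistic needs a clipping bound and an (epsilon, delta) budget that a non-private pipeline never has to tune.

```python
import numpy as np

def dp_mean(values: np.ndarray, clip: float, epsilon: float, delta: float) -> float:
    """Differentially private mean via the Gaussian mechanism."""
    clipped = np.clip(values, -clip, clip)
    sensitivity = 2 * clip / len(values)  # L2 sensitivity of the clipped mean
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return float(clipped.mean() + np.random.normal(0.0, sigma))

salaries = np.random.default_rng(0).normal(60_000, 10_000, size=1_000)
print(dp_mean(salaries, clip=100_000, epsilon=1.0, delta=1e-5))
```

Each release consumes privacy budget, so repeated analyses also require budget accounting on top of this; that layering is where much of the complexity the paper flags comes from.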
- **Claim:** Firms need complementary investments (data pipelines, monitoring tools, feedback loops, human oversight systems), which materially affect the economics of adoption.
  - *Evidence:* Industry case studies and practitioner reports synthesized in the review describing necessary complementary investments; no quantified investment sample or ROI analysis provided here.
- **Claim:** Regulatory attention is likely to focus on transparency, liability for factual errors, data privacy, and nondiscrimination; compliance and auditing will add to adoption costs.
  - *Evidence:* Policy and regulatory analyses aggregated in the review, plus references to ongoing regulatory discussions; no primary regulatory impact study conducted in this paper.
- **Claim:** Generative AI currently lacks the genuine empathy and relational capabilities necessary for high-stakes or sensitive interactions.
  - *Evidence:* Conceptual analyses and practitioner case examples aggregated in the review; limited direct quantitative measurement cited in this brief review.
- **Claim:** Generative models exhibit contextual misunderstandings and cannot reliably infer nuanced customer intent.
  - *Evidence:* Synthesis of empirical studies and practitioner observations documenting misinterpretation and intent-detection failures; no new testing reported in this review.
- **Claim:** There is substitution risk: routine ideation and drafting tasks may be automated, altering task-level labor demand and wage structures.
  - *Evidence:* Task-automation literature and empirical studies of LLMs performing routine drafting/ideation tasks, summarized in the review; no long-run labor-market causality established in the paper.
- **Claim:** Generative AI lacks reliable situational judgment on ambiguous problems and ethical trade-offs, making it insufficient for autonomous decision-making in such contexts.
  - *Evidence:* Case examples and experimental studies cited in the synthesis showing inconsistent or inappropriate responses to ambiguous or ethical scenarios; no large-scale causal evidence provided.
- **Claim:** LLMs are prone to bias, mediocrity, and factual or logical errors when domain-specific context or experiential knowledge is absent.
  - *Evidence:* Review of empirical evaluations documenting biased outputs, superficial or mediocre suggestions, and factual errors in open-ended tasks and domain-specific prompts; evidence comes from multiple short-term studies and applied examples.
- **Claim:** LLMs are predominantly recombinative: they tend to rework and recombine existing material rather than produce deeply novel insights.
  - *Evidence:* Analytical synthesis of output analyses and creativity assessments from multiple empirical studies demonstrating frequent recombination of existing concepts and lower rates of highly original novelty; studies and measures vary.
- **Claim:** Proliferation of low-quality or biased AI-generated ideas creates externalities: increased filtering and reputational costs for firms, and risks of poor product designs, ethical lapses, or regulatory violations if evaluation is insufficient.
  - *Evidence:* Case studies and qualitative reports documenting filtering burdens and instances of biased or misleading outputs; theoretical reasoning about reputational and regulatory risks; direct quantification of these externalities is limited.
- **Claim:** Standard productivity metrics (e.g., total factor productivity, TFP) may undercount the value of ideation and creative augmentation provided by generative AI, making attribution between human and AI contributions difficult (see the worked definition below).
  - *Evidence:* Methodological discussion in the review, supported by heterogeneity in outcome measures across studies and challenges in measuring implemented idea quality and long-run impacts.
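To make the undercounting argument concrete, recall the standard Solow-residual definition of TFP that such critiques allude to (our illustration; the review does not derive this):

$$\ln A_t = \ln Y_t - \alpha \ln K_t - (1 - \alpha)\ln L_t$$

TFP $A_t$ is backed out from measured output $Y_t$, capital $K_t$, and labor $L_t$. If AI-assisted ideation raises the quality of ideas without immediately raising priced output, it enters neither $Y_t$ nor the input terms, so the residual misses it, and nothing in the identity separates the human contribution from the AI contribution.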
- **Claim:** Generative models exhibit recombination bias: they tend to remix existing patterns rather than produce deeply original, paradigm-shifting insights.
  - *Evidence:* Synthesis of output analyses across studies showing frequent recombination of known patterns and limited evidence of wholly novel, paradigm-changing ideas; claim based on qualitative and comparative analyses in the reviewed literature.
- **Claim:** Integration complexity (data access, context continuity, privacy/security, workflow alignment) raises implementation costs and time-to-value.
  - *Evidence:* Deployment case studies and vendor reports documenting engineering effort, data plumbing, compliance work, and multi-month integration timelines; no aggregated cost meta-analysis provided.
- **Claim:** Lack of genuine empathy and emotional intelligence undermines performance on complex or emotionally charged interactions.
  - *Evidence:* Qualitative assessments and noisy measurement from pilot studies and customer feedback in complex cases; limited experimental validation and heterogeneous metrics.
- **Claim:** Time and resource costs for re-running analyses, plus the lack of computational environment capture (e.g., Docker containers, conda environments), increase the difficulty of reproducing results.
  - *Evidence:* Empirical notes from reproduction attempts about compute/time burdens, and survey/interview responses highlighting the absence of containerized or captured environments as an obstacle.
- **Claim:** Environment and dependency issues (library versions, platform differences) are common reproducibility problems.
  - *Evidence:* Failures in running analysis code attributed to dependency/version mismatches and authors' reports; discussion of the lack of environment capture (containers, notebooks) as a contributing factor.
- **Claim:** Unspecified preprocessing steps, parameter settings, or random seeds often prevent exact reproduction of reported results (see the sketch below).
  - *Evidence:* Reproduction attempts where outputs differed due to undocumented preprocessing/parameters, and corroborating survey/interview accounts from original authors.
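A minimal sketch of the kind of run manifest these reproduction studies find missing; the file name, fields, and preprocessing entries are hypothetical illustrations, not a standard:

```python
import json
import platform
import random
import sys
from importlib import metadata

import numpy as np

SEED = 42  # fix and record the seed so stochastic steps are repeatable
random.seed(SEED)
np.random.seed(SEED)

run_manifest = {
    "python": sys.version,
    "platform": platform.platform(),
    "seed": SEED,
    # Record every installed package version to diagnose dependency mismatches.
    "packages": {d.metadata["Name"]: d.version for d in metadata.distributions()},
    # Document preprocessing choices that otherwise go unreported.
    "preprocessing": {"dropna": True, "scaler": "standard"},
}

with open("run_manifest.json", "w") as f:
    json.dump(run_manifest, f, indent=2)
```

Pairing such a manifest with a container image (Docker) or a pinned environment file (conda) addresses the environment-capture gaps the claims above describe.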
- **Claim:** Incomplete, non-runnable, or poorly documented analysis code is a frequent obstacle to reproducibility.
  - *Evidence:* Empirical attempts to run shared analysis artifacts (scripts, code) and authors' self-reports from surveys/interviews identifying code quality and documentation problems.
- **Claim:** A common barrier to reproducing results is missing or incomplete data, or data not accessible in the exact form used in the paper.
  - *Evidence:* Observed failure modes from empirical reproduction attempts, combined with survey and interview responses from paper authors reporting data availability and completeness issues.
- **Claim:** AI illiteracy (a lack of understanding of AI capabilities and limits) impedes adoption and appropriate use of AI tools in finance.
  - *Evidence:* Survey and interview data reporting lower adoption and intended use among respondents with limited self-reported AI understanding, supplemented by qualitative explanations; sample described as finance professionals across multinational institutions (size unspecified).
- **Claim:** Excessive reliance on algorithmic suggestions can erode human judgment and create systemic risks.
  - *Evidence:* Interview reports and, where available, operational/risk metrics indicating overreliance patterns; the authors note systemic-risk implications based on combined qualitative and quantitative observations (no causal identification reported).
- **Claim:** Cognitive biases and inappropriate trust (both overtrust and distrust) distort decision outcomes and limit the benefits of AI-assisted decision-making.
  - *Evidence:* Qualitative interview evidence describing instances of cognitive bias and misplaced trust; some quantitative indicators of decision distortion and risk where operational performance/risk metrics were available; sample: finance professionals across multinational institutions (detailed metrics not specified).
- **Claim:** Market dominance by global platforms can stifle local entrants and distort competition; policies should address market power and data monopolies.
  - *Evidence:* Review of the platform economics and competition policy literature; policy argumentation rather than new empirical competition analysis in this paper.
- **Claim:** If local data ownership, capacity, and governance are weak, economic gains from AI risk accruing to foreign firms and exacerbating income and wealth concentration.
  - *Evidence:* Conceptual synthesis referencing empirical studies on platform rents and data monetization; no original economic distribution analysis presented.
- **Claim:** AI and automation can displace labour, particularly in routine tasks, heightening the need for retraining, active labour policies, and social protection.
  - *Evidence:* Review of the literature on automation and labour markets, combined with normative inference for African contexts; no primary labour market data presented.
- **Claim:** AI adoption raises a risk of digital colonialism: foreign control of data, platforms, and value capture may divert economic gains away from local actors.
  - *Evidence:* Conceptual analysis drawing on policy documents and the empirical literature on data flows, platform economics, and international investment; no original quantitative measurement in this paper.
- **Claim:** Increased monitoring and algorithmic management raise concerns about worker autonomy and privacy, and will prompt regulatory responses (data protection, algorithmic transparency) that shape adoption costs and trajectories.
  - *Evidence:* Recurring concerns reported across included studies and in the review's policy implication section; grounded in qualitative and normative discussions within the literature.
- **Claim:** Over-standardisation of curricula can create mismatches between certified competencies and firm-specific needs.
  - *Evidence:* Stated in Risks: the paper warns that overly standardised curricula may not fit firm-specific requirements. This is a conceptual caution, not supported by within-paper empirical comparisons.
- **Claim:** High fixed costs may concentrate training capacity among a few providers, risking reduced competition.
  - *Evidence:* Listed under Risks to Watch: the paper warns that high fixed costs could concentrate capacity. This is a theoretical market-concentration risk; no empirical market analysis is provided.
- **Claim:** Upfront and maintenance costs are substantial; economic evaluation should compare these costs to downstream benefits such as placement rates and productivity gains.
  - *Evidence:* The paper recommends economic evaluation and lists cost-per-curriculum and other cost metrics; presented as advice rather than results. No empirical cost–benefit data provided.
- **Claim:** Complexity and lock-in to specific standards may raise barriers to innovation and increase switching costs.
  - *Evidence:* Discussed in Regulation and compliance economics and in Risks: the paper argues that standardisation and embedded processes could produce vendor/standard lock-in. This is a theoretical risk flagged by the authors, not supported by empirical data in the paper.
- **Claim:** Biased training data or objective functions in AI models could perpetuate gender disparities by offering different products or risk scores to men and women (a simple screening check is sketched below).
  - *Evidence:* Review of the AI fairness literature and examples of algorithmic disparate impacts summarized in the paper (conceptual and case evidence; not an empirical test tied specifically to fintech products in the review).
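One common first screen for this kind of disparity is the four-fifths rule from the fairness literature; the sketch below is our illustration with simulated data, not a method or result from the reviewed paper.

```python
import numpy as np

def selection_rates(approved: np.ndarray, group: np.ndarray) -> dict:
    """Approval rate for each group label."""
    return {g: float(approved[group == g].mean()) for g in np.unique(group)}

def four_fifths_ratio(rates: dict) -> float:
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

rng = np.random.default_rng(1)
group = rng.choice(["men", "women"], size=10_000)
# Simulated approvals with a built-in disparity, standing in for a biased model.
approved = rng.uniform(size=10_000) < np.where(group == "men", 0.60, 0.45)

rates = selection_rates(approved, group)
print(rates)                     # roughly {'men': 0.60, 'women': 0.45}
print(four_fifths_ratio(rates))  # ~0.75; below 0.8 flags potential disparate impact
```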
- **Claim:** Firms will need to invest in new control technologies, governance structures, and personnel (AI auditors, red teams), increasing the total cost of GenAI adoption.
  - *Evidence:* Economic reasoning in the implications section; no empirical cost estimates or survey data; projection based on anticipated control needs.
- **Claim:** Malicious insiders, external actors (vendors, consultants, customers), shadow AI (unsanctioned consumer-grade GenAI use), and supply-chain/third-party prompt templates are plausible attack vectors for prompt fraud.
  - *Evidence:* Threat taxonomy and scenario mapping with case-style examples; conceptual identification of actors rather than documented incident attribution.
- **Claim:** Poor logging, weak prompt governance, and over-reliance on machine-generated artifacts increase organizational vulnerability to prompt fraud.
  - *Evidence:* Control gap analysis and prescriptive argumentation; examples of weak controls used to illustrate exploitability; no empirical measurement of effect sizes.
- **Claim:** Because prompt fraud operates at the linguistic/procedural surface rather than the network/technical surface, existing control frameworks are ill-prepared to address this new attack surface.
  - *Evidence:* Control gap analysis comparing conventional internal controls to the linguistic attack surface; conceptual rather than empirical evaluation.
- **Claim:** Upfront governance costs (policy, tooling, staff) become a key part of adoption cost and affect ROI calculations and payback periods for automation investments (see the worked example below).
  - *Evidence:* Economic reasoning and implications discussed in the paper; no empirical cost data provided; the recommendation is based on practitioner experience and theoretical cost accounting.
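A worked example of the payback-period effect, with figures that are purely illustrative assumptions (the paper provides no cost data):

```python
def payback_years(upfront: float, annual_net_benefit: float) -> float:
    """Simple payback period: years to recover the upfront outlay."""
    return upfront / annual_net_benefit

# Baseline automation investment (assumed figures).
base = payback_years(upfront=500_000, annual_net_benefit=250_000)

# Same investment with governance added: upfront policy/tooling/staff costs,
# plus an ongoing audit overhead that reduces the annual net benefit.
with_gov = payback_years(upfront=500_000 + 150_000,
                         annual_net_benefit=250_000 - 40_000)

print(f"Without governance costs: {base:.1f} years")     # 2.0
print(f"With governance costs:    {with_gov:.1f} years")  # ~3.1
```

Even modest governance overheads can lengthen payback materially, which is why the paper treats them as first-order inputs to adoption decisions rather than afterthoughts.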
- **Claim:** Traditional automation governance is often ad hoc, underestimates security and compliance risks, and does not scale safely for mission-critical enterprise systems.
  - *Evidence:* Synthesis of industry best practices and practitioner-sourced lessons (qualitative observations and case illustrations); no systematic survey or quantitative incidence rates provided.
- **Claim:** Prompt fraud reduces the marginal cost of producing convincing fraudulent artifacts, which may increase fraud frequency and expected losses absent mitigations (the incentive logic is sketched below).
  - *Evidence:* Economic reasoning and conceptual modeling of incentives; no empirical estimates of frequency or losses included.
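One way to make the incentive logic explicit (our notation, not the paper's model) is to write expected annual fraud loss as

$$\mathbb{E}[\text{loss}] = \lambda(c)\,\bar{L}, \qquad \lambda'(c) < 0,$$

where $c$ is the attacker's cost per convincing artifact, $\lambda(c)$ the resulting incident frequency, and $\bar{L}$ the average loss per incident. GenAI lowers $c$, so $\lambda$ rises and expected losses grow unless mitigations raise the effective $c$ back up or reduce $\bar{L}$.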
- **Claim:** Lack of prompt provenance, versioning, and validation practices increases organizational exposure to prompt fraud.
  - *Evidence:* Conceptual analysis and recommended controls (provenance, versioning) drawn from audit-framework comparisons and threat modeling.
- **Claim:** Many workflows lack sufficient logging and traceability of prompts, responses, and model versions, creating a control weakness for detecting prompt fraud (see the sketch below).
  - *Evidence:* Observations from the literature/regulatory review and the paper's threat/control mapping; asserted as a common operational gap (no systematic measurement).
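A hedged sketch of the kind of prompt audit record such a control could capture; the field names and hashing choice are our illustration, not a standard from the paper:

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PromptAuditRecord:
    user_id: str
    model_version: str        # pin the exact model build used
    prompt_template_id: str   # provenance for third-party templates
    prompt: str
    response: str
    timestamp: str

    def content_hash(self) -> str:
        """Tamper-evidence: hash the prompt/response pair for later verification."""
        payload = json.dumps({"p": self.prompt, "r": self.response}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

record = PromptAuditRecord(
    user_id="u-123",
    model_version="model-2024-06-01",
    prompt_template_id="tmpl-finance-007",
    prompt="Draft a vendor payment approval memo ...",
    response="<generated memo>",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(record.content_hash())
```

Persisting these records alongside their hashes would give auditors the traceability the claim identifies as missing.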