Evidence (5877 claims)

Claim counts by topic:

| Topic | Claims |
|---|---|
| Adoption | 7395 |
| Productivity | 6507 |
| Governance | 5877 |
| Human-AI Collaboration | 5157 |
| Innovation | 3492 |
| Org Design | 3470 |
| Labor Markets | 3224 |
| Skills & Training | 2608 |
| Inequality | 1835 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
Governance
Firms that can credibly supply explainability and governance may capture a premium—explainability can be a competitive differentiator and a signal of quality and lower regulatory risk.
Conceptual synthesis and market-structure arguments from the reviewed literature; reviewed studies provide theoretical and some qualitative support but not systematic market-price estimates.
Policy should incentivize transparency, auditability, standards for human–AI interfaces, workforce development, certification of teaming practices, and liability frameworks to ensure accountability and equitable outcomes.
Normative recommendation based on ethical and governance considerations synthesized in the paper; not supported by policy evaluation evidence within the paper.
Orchestrating attention and interrogation through interface and workflow design helps manage what humans and AI focus on and how they challenge/verify each other, thereby reducing errors and misuse.
Prescriptive claim grounded in human factors and HCI literature synthesized by the authors; the paper suggests these mechanisms but does not report empirical trials demonstrating effects.
Design principles (define goals/constraints, partition roles, orchestrate attention/interrogation, build knowledge infrastructures, continuous training/evaluation) are necessary design levers to build high-performing, transparent, trustworthy, and equitable Human–AI teams.
Prescriptive synthesis from reviewed literatures and conceptual modeling; these principles are proposed heuristics rather than empirically validated interventions in the paper.
Embedding AI produces operational gains: automation of routine tasks, fewer errors, faster decision cycles, and continuous model learning/refinement.
Operational claim articulated conceptually with suggested evaluation metrics (forecast accuracy, latency, false positive/negative rates); the paper does not present empirical measurement, sample sizes, or deployment results.
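The evaluation metrics suggested above (forecast accuracy, latency, false positive/negative rates) could be computed from deployment logs; a minimal Python sketch, in which the record field names (`predicted`, `actual`, `latency_ms`) are illustrative assumptions rather than any standard schema:

```python
# Sketch: computing the suggested evaluation metrics from hypothetical
# deployment records. Field names are illustrative, not a real API.

def evaluate(records):
    """Compute accuracy, mean latency, and false positive/negative rates."""
    tp = fp = tn = fn = 0
    total_latency = 0.0
    for r in records:
        total_latency += r["latency_ms"]
        if r["predicted"] and r["actual"]:
            tp += 1
        elif r["predicted"] and not r["actual"]:
            fp += 1
        elif not r["predicted"] and r["actual"]:
            fn += 1
        else:
            tn += 1
    n = len(records)
    return {
        "accuracy": (tp + tn) / n,
        "mean_latency_ms": total_latency / n,
        # False positive rate: share of actual negatives flagged positive.
        "fpr": fp / (fp + tn) if (fp + tn) else 0.0,
        # False negative rate: share of actual positives missed.
        "fnr": fn / (fn + tp) if (fn + tp) else 0.0,
    }
```

Tracking these per release would give the continuous learning/refinement loop a concrete measurement target.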
Risk management can accelerate AI adoption by lowering uncertainty for managers and investors, thereby affecting diffusion and productivity gains from AI.
Conceptual implication derived from the review's synthesis and discussion (policy/implication section); not supported by primary empirical testing within the reviewed literature.
Firms that adopt structured risk management for AI projects can reduce model failure, operational losses, and reputational costs—improving risk-adjusted returns on AI investment.
Theoretical and practical extrapolation from general RM frameworks and thematic findings in the literature; no AI-specific primary empirical studies included in the review.
Structured risk management can produce potential cost savings via reduced loss events and more efficient capital allocation.
Reported as a benefit across some reviewed studies and practitioner reports; the review notes lack of primary empirical quantification of effect sizes.
Firms that design processes to preserve human diversity and elicit diverse AI outputs may capture greater productivity gains, increasing returns to organizational capability rather than to raw model access.
Theoretical implication and prescriptive recommendation based on observed homogenization; no direct causal firm-level evidence presented, inference based on economic reasoning.
Procurement contracts for AI systems can require staged validation (pilot, local fine-tuning) and performance-linked payments to align incentives and reduce adoption risk.
Policy recommendation drawn from procurement and incentive-design literature synthesized in the review; not an empirical claim about observed outcomes but a proposed intervention to mitigate identified risks.
Clear regulatory standards for synthetic data quality, provenance, and acceptable validation pipelines will lower transaction costs, reduce liability risk, and stimulate private-sector offerings (synthetic-data services, marketplaces).
Policy and governance analyses in the review arguing that regulatory clarity reduces uncertainty and promotes market activity; this is a policy inference supported by comparative regulatory studies rather than direct causal empirical proof specific to African markets.
AI contributes to flatter, more networked and modular organizational forms, with increased cross-functional coordination enabled by shared data platforms and real-time analytics.
Conceptual reasoning supported by cross-sector illustrative examples; no standardized cross-firm comparative empirical study reported in the book.
Model and platform providers may capture significant rents through APIs and integrated developer tooling.
Market-structure analysis and observations of current platform monetization strategies; speculative projection based on platform economics.
Skill premiums may shift toward workers who can effectively collaborate with AI (prompting, verification, security auditing).
Theoretical and early observational studies suggesting complementary skills add value; limited empirical wage/earnings evidence to date.
Computer science curricula should emphasize computational thinking, debugging skills, and verification practices rather than rote coding alone.
Educational implications drawn from studies of learning with LLMs, risks of shallow learning, and expert recommendations; primarily normative and prescriptive rather than experimental proof.
White-box audits (inspecting model internals, logs, provenance) can detect evasion and recalibrate norms when triggered by anomalies or high-value activity.
Proposed legal and technical audit procedures discussed in the paper; authors do not present audit results or case studies.
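One way to operationalize the trigger condition described above (audits fired by anomalies or high-value activity) is a simple screening rule over usage logs; a hedged sketch in which the thresholds, field names, and z-score rule are all illustrative choices, not the paper's procedure:

```python
# Sketch: selecting accounts for a white-box audit when logged activity is
# anomalous (volume far above the population mean) or high-value.
# Thresholds and field names are hypothetical.
from statistics import mean, pstdev

def audit_triggers(logs, value_threshold=100_000, z_cutoff=3.0):
    """Return account ids whose logged activity warrants a white-box audit."""
    volumes = [entry["volume"] for entry in logs]
    mu, sigma = mean(volumes), pstdev(volumes)
    flagged = set()
    for entry in logs:
        high_value = entry["value"] >= value_threshold
        anomalous = sigma > 0 and (entry["volume"] - mu) / sigma >= z_cutoff
        if high_value or anomalous:
            flagged.add(entry["account"])
    return flagged
```

In practice the flag would only gate the deeper inspection of model internals, logs, and provenance, not substitute for it.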
Norm-based tax rates derived from observable usage characteristics can reduce gaming and simplify compliance.
Normative argument and proposal in the paper recommending standardized tax schedules; no empirical evaluation or calibration.
Dynamic oversight regimes (ongoing audits, continuous certification) are likely more effective than one-time approvals for managing risks from agentic AI.
Policy and governance argument based on the dynamic nature of agentic systems; presented as a recommendation rather than empirically validated.
Firms will place greater value on alignment-as-a-service, monitoring platforms, and certification/assurance products as agentic systems proliferate.
Market-structure and demand reasoning from the paper; proposed as an implication rather than empirically demonstrated.
DAR-capable systems that credibly implement transparent registers and controlled reversibility may face lower adoption frictions in high-stakes sectors, affecting market dynamics and insurer/purchaser willingness to pay.
Economics-oriented implication and conjecture in the paper about adoption dynamics and market effects; not empirically tested in the manuscript.
Demand will increase for complementary goods: orchestration platforms, testing/verification tools, secure code-generation services, and team-level integrations.
Projected market implication based on practitioner-identified frictions (quality, security, integration) in the Netlight study; speculative market prediction without market data.
The need to orchestrate AI ensembles increases demand for skills in system design, AI-tooling, and coordination rather than only coding.
Authors' inference based on observed practitioner emphasis on supervision and integration tasks in the Netlight qualitative study; no labor market data provided.
First-mover and scale advantages are likely for firms that successfully integrate AI with robust oversight, potentially creating durable cost and service-quality advantages.
Theoretical and strategic analyses aggregated in the review; this is inferential and not supported by longitudinal competitive empirical studies within this paper.
Platforms combining high-volume generation with effective filtering/curation can create strong network effects and concentration in markets for AI-assisted ideation.
Market-structure reasoning and illustrative platform examples from the literature; no empirical market-wide causal studies reported in the review.
Firms that embed AI into collaborative workflows and invest in human curation may capture disproportionate returns (first-mover and scale advantages).
Theoretical/strategic argument supported by some applied case evidence and platform-market reasoning cited in the synthesis; the review notes absence of systematic causal firm-level evidence.
Generative AI will create complementarity: increasing returns to skills in evaluation, curation, synthesis, and domain expertise that integrate AI outputs.
Theoretical labor-economics reasoning supported by case studies and task-level studies showing demand for evaluation/curation skills in AI-assisted workflows; direct causal evidence on wage effects is limited in the reviewed literature.
Lowered cost and time of ideation and early-stage R&D due to generative AI may accelerate innovation cycles and reduce firms' search costs.
Inference from studies reporting reduced time-to-prototype and increased ideation; this is an economic interpretation rather than directly measured long-run firm-level innovation rates in the reviewed studies.
Firms must redesign KPIs to capture trust-related externalities (accuracy, escalation rates, repeat contacts) rather than speed and throughput alone, so as to avoid perverse incentives.
Recommendation based on observed trade-offs in deployments where emphasis on speed/throughput can harm quality/trust; not supported by randomized tests in the paper.
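The trust-related KPIs named above (accuracy, escalation rates, repeat contacts) could be tracked alongside speed metrics; a minimal sketch over hypothetical contact records, where every field name is an assumption for illustration:

```python
# Sketch: trust-oriented KPIs for an AI-assisted service channel, computed
# from hypothetical contact records. Field names are illustrative.
from collections import Counter

def trust_kpis(contacts):
    """KPIs that capture quality/trust, not only speed and throughput."""
    n = len(contacts)
    correct = sum(c["resolved_correctly"] for c in contacts)
    escalated = sum(c["escalated_to_human"] for c in contacts)
    # A repeat contact: the same customer appearing more than once in the window.
    per_customer = Counter(c["customer"] for c in contacts)
    repeats = sum(count - 1 for count in per_customer.values())
    return {
        "accuracy": correct / n,
        "escalation_rate": escalated / n,
        "repeat_contact_rate": repeats / n,
        "mean_handle_s": sum(c["handle_s"] for c in contacts) / n,
    }
```

Weighting such measures into incentive schemes is the design lever the recommendation points at: optimizing `mean_handle_s` alone is what creates the perverse incentive.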
Transparency about AI use, seamless escalation to humans, and continuous monitoring/feedback loops are essential mitigations to avoid quality failures and trust erosion.
Governance literature, best-practice case studies, and deployment reports recommending transparency and escalation; limited direct causal evidence on mitigation effectiveness.
Firms that successfully integrate trustworthy, accurate AI can achieve faster strategic pivots and potentially gain competitive advantages and higher returns to organizational capital that embeds AI capabilities.
The quantitative analysis reports associations between perceived trust/accuracy and organizational agility indicators, and qualitative interview evidence suggests competitive benefits; explicit causal estimates of returns are not provided (the implication is inferential).
Improved matching from predictive tools can shorten vacancy durations and improve reallocation dynamics in labor markets.
Implication from the review citing reported improvements in candidate screening and matching in some included studies; identified as a mechanism for labor-market effects.
The framework supports innovation in instructional design through logical modelling and data analysis.
Listed as an advantage: logical modelling and data analysis enable innovation in instructional design. Support is conceptual; no empirical evidence presented.
Implementing the proposed framework will reduce 'brain waste' by improving recognition and cross-border mobility of DRC-trained technical personnel.
Theoretical claim supported by operations-research logic and labor-market allocation arguments in the paper; no empirical causal evaluation, sample, or longitudinal labor-market outcome data provided.
A standardized governance pattern lowers coordination and compliance costs across business units, potentially increasing adoption and accelerating diffusion of advanced automation.
Theoretical claim supported by case-level practitioner observations and economic reasoning; no empirical diffusion or adoption-rate data provided.
The reference pattern yields benefits including faster, safer scaling of automation across business units, reduced compliance incidents and data-exposure risk, and better accountability and traceability of automated decisions.
Claimed benefits supported by practitioner anecdotes and multi-sector implementation descriptions; no large-sample quantitative estimates or causal inference reported.
Embedding compliance features into automation can reduce regulatory fines and litigation risk, thereby affecting firm risk profiles and cost of capital.
Theoretical implication drawn from aligning governance with compliance objectives; no empirical evidence linking the proposed pattern to reduced fines or changes in cost of capital in the paper.
The framework is applicable across multiple sectors and aligns with industry best practices; it is presented as a deployable pattern rather than a one-size-fits-all product.
Authors' assertion based on multi-sector practitioner examples and alignment with documented industry practices (qualitative). Details on sector coverage and case selection are limited.
The proposed governed hyperautomation pattern yields benefits including faster scaling of automation, reduced operational risk, maintained regulatory compliance, and preserved long-term system integrity.
Claim grounded in conceptual argument and practitioner case-based illustrations; no large-scale quantitative evaluation or causal inference provided in the paper.
Technical mitigations such as prompt/response attestation, watermarking, model output provenance, access controls, differential-design of prompts (few-shot safety), and monitoring tools can help detect or prevent prompt fraud.
Proposed technical controls and rationale derived from threat modeling and prior literature on provenance/watermarking; proposals are not empirically validated in the paper.
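One of the mitigations listed, prompt/response attestation with output provenance, can be illustrated as a hash-chained log in which tampering with any earlier record invalidates the chain. A hedged sketch: the record layout is an assumption for illustration, and real provenance schemes differ.

```python
# Sketch: hash-chained attestation records for prompt/response pairs.
# The record layout is illustrative, not a standard.
import hashlib
import json

def attest(prompt, response, model_id, prev_hash="0" * 64):
    """Return an attestation record linking this exchange to the prior one."""
    record = {
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(records):
    """Check each record's hash is intact and links to its predecessor."""
    prev = "0" * 64
    for rec in records:
        if rec["prev_hash"] != prev:
            return False
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True
```

Such a log supports after-the-fact detection of prompt fraud; it does not by itself prevent it, which is why the paper pairs attestation with access controls and monitoring.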
Targeted subsidies or support for SMEs to access SECaaS could accelerate secure AI adoption where scale barriers exist.
Economic rationale and proposed field-experiment designs; no empirical trial results presented in the chapter.
Clarifying liability and the shared responsibility model will better align incentives between providers and customers and improve security outcomes.
Policy and legal analysis; case studies of incidents where unclear responsibilities hampered response; recommended as an intervention rather than proven by causal evidence.
Promoting interoperable standards and certification can reduce lock-in and lower search costs for buyers, fostering competition in SECaaS markets.
Policy recommendation grounded in market-design theory and analogies to other standardization efforts; supporting case studies from other technology markets suggested but not empirically established here.
Demand would grow for liability insurance tailored to EdTech, third‑party audits, fairness certifications, and specialized legal advisory services; these markets would affect costs and differential competitiveness.
Predictive market analysis and policy reasoning (no survey or market data presented).
Stricter legal exposure may slow some risky experimentation but encourage investment in fairness testing, robust evaluation, and explainability tools — potentially increasing the quality and trustworthiness of deployed AI in education.
Normative economic argumentation about incentives for R&D and testing; no empirical measurement of innovation rates provided.
Faster iterative experimental cycles enabled by LLM orchestration may increase returns to experimental R&D and change the optimal allocation between computation, instrumentation, and labor.
Economic argumentation about iterative cycles and returns to capital/labor; proposed rather than empirically demonstrated.
The paper provides an initial mapping from diagnosis to intervention strategies (therapeutics) — i.e., treatment planning for model dysfunctions.
Conceptual mapping and proposed intervention strategies documented in the therapeutics section (initial mappings; not claimed as exhaustive).
AI should serve precision and purpose in public policy — improving foresight, enabling better trade-offs, and preserving democratic accountability.
Normative policy prescription and conceptual argumentation in the book; no empirical testing or quantified outcomes reported.
AI-driven systems should empower people with knowledge and pathways to participate in global markets rather than concentrate gains.
Normative recommendation derived from policy analysis and value judgments in the book; not supported by empirical evidence in the blurb.
Algorithmic transparency and auditability can reduce systemic risk from opaque automated lending decisions and improve regulator oversight and macroprudential policy.
Conceptual/systemic-risk argument in the "Systemic risk & governance externalities" section; no empirical systemic-risk analysis provided.
Improved algorithmic transparency could reduce information asymmetries, lowering adverse selection and moral hazard over time and potentially expanding credit to underserved populations.
Conceptual economic argument in the "Credit allocation & pricing" section; based on theory rather than empirical testing.