Evidence (4560 claims)

Claims by topic:

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Productivity (active filter)
- Human verification (and automated verification infrastructure) becomes the limiting factor and a scarce complement to AI generation, raising demand and wages for verification expertise and tooling. *Evidence:* Theoretical labor-market analysis and complementarity argument in the paper; no labor-market data or econometric estimates provided.
- AI contributes to flatter, more networked and modular organizational forms, with increased cross-functional coordination enabled by shared data platforms and real-time analytics. *Evidence:* Conceptual reasoning supported by cross-sector illustrative examples; no standardized cross-firm comparative empirical study reported in the book.
- Valuation of AI services should account for initiation assistance (fixed-cost reduction to starting tasks); monetizable value extends beyond direct task automation and could affect pricing/willingness-to-pay models. *Evidence:* Economic argument and implication drawn from the conceptual model; the paper does not provide empirical willingness-to-pay or pricing studies.
- Conversational initiation assistance could complement human labor by increasing worker throughput and engagement, rather than directly substituting for skilled tasks. *Evidence:* Economic/managerial speculation in the paper; no empirical workforce or productivity studies presented.
- Designing interfaces and metrics that focus only on task completion or execution misses value derived from initiation assistance. *Evidence:* Analytic recommendation based on the proposed model; no empirical metric-validation or A/B test results presented.
- Conversational AI provides a distinct, non-executive mode of value, acting as an action-initiation interface in addition to being a task-execution tool. *Evidence:* Conceptual/economic argumentation in the paper; no empirical valuation or willingness-to-pay estimates provided.
- Iterative conversation with AI surfaces sub-tasks and structures problems ("structuring"), creating clearer action plans and reducing initiation barriers. *Evidence:* Conceptual argument and illustrative example; the paper does not present systematic coding, task analyses, or empirical tests.
- Externalization (expressing frustration/stress to an external interlocutor) reduces affective load and decision paralysis, facilitating task start. *Evidence:* Theoretical reasoning supported by an illustrative anecdote; no empirical measurements or sample-based evidence provided.
- Verbalization (talking through a problem with the AI) helps users organize thoughts and identify next steps, thereby lowering barriers to action. *Evidence:* Mechanistic argument in the paper; no experimental or observational data reported to validate the mechanism.
- The "Peripheral Approach" (beginning with casual, low-stakes dialogue such as complaints or describing where one is stuck, rather than immediately requesting task execution) gradually reduces initiation friction. *Evidence:* Theoretical argument and illustrative anecdote from the author; no controlled studies or quantitative measures presented.
- Casual, conversation-style interactions with AI can reduce psychological barriers that prevent people from starting tasks. *Evidence:* Conceptual/theoretical argumentation in the paper, illustrated by an anecdote (the author's use of casual AI conversation to begin drafting the paper); no systematic empirical data, experiments, or observational samples reported.
- Model and platform providers may capture significant rents through APIs and integrated developer tooling. *Evidence:* Market-structure analysis and observations of current platform monetization strategies; speculative projection based on platform economics.
- Skill premiums may shift toward workers who can effectively collaborate with AI (prompting, verification, security auditing). *Evidence:* Theoretical and early observational studies suggesting complementary skills add value; limited empirical wage/earnings evidence to date.
- Computer science curricula should emphasize computational thinking, debugging skills, and verification practices rather than rote coding alone. *Evidence:* Educational implications drawn from studies of learning with LLMs, risks of shallow learning, and expert recommendations; primarily normative and prescriptive rather than experimental proof.
- Producing occupation × skill × region OAIES scores with uncertainty intervals and scenario modes (conservative/optimistic adoption) will improve decision-relevant information for policymakers. *Evidence:* Design specification and intended outputs described in the paper; no user testing or policymaker impact evaluation reported.
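A score with an uncertainty interval and scenario modes could be carried in a small container like the one below. The `OAIESScore` type, the multiplicative adoption multipliers, and every number here are illustrative assumptions for the sketch, not the paper's actual aggregation rule:

```python
from dataclasses import dataclass

@dataclass
class OAIESScore:
    """Hypothetical exposure score for one occupation x skill x region cell."""
    occupation: str
    skill: str
    region: str
    point: float  # central exposure estimate in [0, 1]
    lo: float     # lower bound of the uncertainty interval
    hi: float     # upper bound of the uncertainty interval

def scenario_scores(base: OAIESScore, adoption: dict[str, float]) -> dict[str, OAIESScore]:
    """Scale a base score by scenario-specific adoption multipliers,
    propagating the interval multiplicatively and capping at 1.0
    (an illustrative propagation rule, not the paper's)."""
    out = {}
    for name, m in adoption.items():
        out[name] = OAIESScore(
            base.occupation, base.skill, base.region,
            point=min(1.0, base.point * m),
            lo=min(1.0, base.lo * m),
            hi=min(1.0, base.hi * m),
        )
    return out

# One hypothetical cell under two adoption scenarios.
cell = OAIESScore("data analyst", "report drafting", "EU", point=0.52, lo=0.41, hi=0.63)
modes = scenario_scores(cell, {"conservative": 0.7, "optimistic": 1.3})
```

Keeping the interval attached to every scenario output is what makes the scores decision-relevant: a policymaker sees the range, not just a point estimate.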
- When tasks are well matched to GenAI capabilities, firms can raise output per consultant and reduce time-per-task, thereby changing the marginal productivity of labor in consulting. *Evidence:* Inferred in the implications section from interview-based observations and the TGAIF framework; no quantitative measurement of output per consultant or time savings reported in the study.
- Dynamic oversight regimes (ongoing audits, continuous certification) are likely more effective than one-time approvals for managing risks from agentic AI. *Evidence:* Policy and governance argument based on the dynamic nature of agentic systems; presented as a recommendation rather than empirically validated.
- Firms will place greater value on alignment-as-a-service, monitoring platforms, and certification/assurance products as agentic systems proliferate. *Evidence:* Market-structure and demand reasoning from the paper; proposed as an implication rather than empirically demonstrated.
- DAR-capable systems that credibly implement transparent registers and controlled reversibility may face lower adoption frictions in high-stakes sectors, affecting market dynamics and insurer/purchaser willingness to pay. *Evidence:* Economics-oriented implication and conjecture in the paper about adoption dynamics and market effects; not empirically tested in the manuscript.
- Demand will increase for complementary goods: orchestration platforms, testing/verification tools, secure code-generation services, and team-level integrations. *Evidence:* Projected market implication based on practitioner-identified frictions (quality, security, integration) in the Netlight study; speculative market prediction without market data.
- The need to orchestrate AI ensembles increases demand for skills in system design, AI tooling, and coordination rather than only coding. *Evidence:* Authors' inference based on observed practitioner emphasis on supervision and integration tasks in the Netlight qualitative study; no labor-market data provided.
- First-mover and scale advantages are likely for firms that successfully integrate AI with robust oversight, potentially creating durable cost and service-quality advantages. *Evidence:* Theoretical and strategic analyses aggregated in the review; inferential and not supported by longitudinal competitive empirical studies within this paper.
- Platforms combining high-volume generation with effective filtering/curation can create strong network effects and concentration in markets for AI-assisted ideation. *Evidence:* Market-structure reasoning and illustrative platform examples from the literature; no empirical market-wide causal studies reported in the review.
- Firms that embed AI into collaborative workflows and invest in human curation may capture disproportionate returns (first-mover and scale advantages). *Evidence:* Theoretical/strategic argument supported by some applied case evidence and platform-market reasoning cited in the synthesis; the review notes the absence of systematic causal firm-level evidence.
- Generative AI will create complementarity: increasing returns to skills in evaluation, curation, synthesis, and domain expertise that integrate AI outputs. *Evidence:* Theoretical labor-economics reasoning supported by case studies and task-level studies showing demand for evaluation/curation skills in AI-assisted workflows; direct causal evidence on wage effects is limited in the reviewed literature.
- Lowered cost and time of ideation and early-stage R&D due to generative AI may accelerate innovation cycles and reduce firms' search costs. *Evidence:* Inference from studies reporting reduced time-to-prototype and increased ideation; an economic interpretation rather than directly measured long-run firm-level innovation rates in the reviewed studies.
- Firms must redesign KPIs to capture trust-related externalities (accuracy, escalation rates, repeat contacts) rather than only speed and throughput, to avoid perverse incentives. *Evidence:* Recommendation based on observed trade-offs in deployments where emphasis on speed/throughput can harm quality and trust; not supported by randomized tests in the paper.
- Transparency about AI use, seamless escalation to humans, and continuous monitoring/feedback loops are essential mitigations to avoid quality failures and trust erosion. *Evidence:* Governance literature, best-practice case studies, and deployment reports recommending transparency and escalation; limited direct causal evidence on mitigation effectiveness.
- Firms that successfully integrate trustworthy, accurate AI can achieve faster strategic pivots and potentially gain competitive advantages and higher returns to organizational capital that embeds AI capabilities. *Evidence:* Associations between perceived trust/accuracy and organizational-agility indicators in the quantitative analysis, plus qualitative interview evidence suggesting competitive benefits; explicit causal estimates of returns not provided (the implication is inferential).
- Improved matching from predictive tools can shorten vacancy durations and improve reallocation dynamics in labor markets. *Evidence:* Implication from the review citing reported improvements in candidate screening and matching in some included studies; identified as a mechanism for labor-market effects.
- The framework supports innovation via logical modelling and data analysis. *Evidence:* Listed as an advantage: logical modelling and data analysis enable innovation in instructional design; support is conceptual, with no empirical evidence presented.
- A standardized governance pattern lowers coordination and compliance costs across business units, potentially increasing adoption and accelerating diffusion of advanced automation. *Evidence:* Theoretical claim supported by case-level practitioner observations and economic reasoning; no empirical diffusion or adoption-rate data provided.
- The reference pattern yields benefits including faster, safer scaling of automation across business units; reduced compliance incidents and data-exposure risk; and better accountability and traceability of automated decisions. *Evidence:* Claimed benefits supported by practitioner anecdotes and multi-sector implementation descriptions; no large-sample quantitative estimates or causal inference reported.
- Embedding compliance features into automation can reduce regulatory fines and litigation risk, thereby affecting firm risk profiles and cost of capital. *Evidence:* Theoretical implication drawn from aligning governance with compliance objectives; no empirical evidence in the paper linking the proposed pattern to reduced fines or changes in cost of capital.
- The framework is applicable across multiple sectors and aligns with industry best practices; it is presented as a deployable pattern rather than a one-size-fits-all product. *Evidence:* Authors' assertion based on multi-sector practitioner examples and alignment with documented industry practices (qualitative); details on sector coverage and case selection are limited.
- The proposed governed hyperautomation pattern yields benefits including faster scaling of automation, reduced operational risk, maintained regulatory compliance, and preserved long-term system integrity. *Evidence:* Claim grounded in conceptual argument and practitioner case-based illustrations; no large-scale quantitative evaluation or causal inference provided in the paper.
- Technical mitigations such as prompt/response attestation, watermarking, model-output provenance, access controls, differential design of prompts (few-shot safety), and monitoring tools can help detect or prevent prompt fraud. *Evidence:* Proposed technical controls and rationale derived from threat modeling and prior literature on provenance/watermarking; the proposals are not empirically validated in the paper.
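As a minimal sketch of what prompt/response attestation could look like, the following chains an HMAC tag over each canonicalized record so the resulting register is tamper-evident. The key handling, record shape, and chaining rule are assumptions for illustration, not the paper's proposed design; a production system would hold the key in an HSM/KMS:

```python
import hashlib
import hmac
import json

SECRET = b"deployment-attestation-key"  # placeholder; real deployments use a managed key

def attest(record: dict, prev_tag: str = "") -> str:
    """Return an HMAC-SHA256 tag over a canonicalized prompt/response record,
    chained to the previous tag so any later edit invalidates the chain."""
    payload = json.dumps(record, sort_keys=True).encode() + prev_tag.encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_chain(records: list[dict], tags: list[str]) -> bool:
    """Recompute the chain and compare tags in constant time."""
    prev = ""
    for rec, tag in zip(records, tags):
        if not hmac.compare_digest(attest(rec, prev), tag):
            return False
        prev = tag
    return True

# Build an attested register for two interactions.
log = [{"prompt": "summarize Q3 figures", "response": "..."},
       {"prompt": "draft follow-up email", "response": "..."}]
tags, prev = [], ""
for rec in log:
    prev = attest(rec, prev)
    tags.append(prev)
```

Because each tag covers the previous one, altering any earlier record or reordering entries breaks verification for everything downstream, which is the property a provenance register needs.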
- Targeted subsidies or support for SMEs to access SECaaS could accelerate secure AI adoption where scale barriers exist. *Evidence:* Economic rationale and proposed field-experiment designs; no empirical trial results presented in the chapter.
- Clarifying liability and the shared-responsibility model will better align incentives between providers and customers and improve security outcomes. *Evidence:* Policy and legal analysis, with case studies of incidents where unclear responsibilities hampered response; recommended as an intervention rather than proven by causal evidence.
- Promoting interoperable standards and certification can reduce lock-in and lower search costs for buyers, fostering competition in SECaaS markets. *Evidence:* Policy recommendation grounded in market-design theory and analogies to other standardization efforts; supporting case studies from other technology markets are suggested but not empirically established here.
- Faster iterative experimental cycles enabled by LLM orchestration may increase returns to experimental R&D and change the optimal allocation between computation, instrumentation, and labor. *Evidence:* Economic argumentation about iterative cycles and returns to capital/labor; proposed rather than empirically demonstrated.
- The method can identify frontier topics and cross-field convergence (e.g., methods migrating from NLP to vision) to inform assessments of comparative advantage and specialization across institutions and countries. *Evidence:* Proposed implication: using topic maps and cluster dynamics to detect frontier topics and cross-field migration; no concrete empirical examples or validation presented beyond the general mapping claim on ICML/ACL abstracts.
- The approach is scalable and model-agnostic: different LLMs and embedding models can be swapped into the pipeline without changing the overall method. *Evidence:* Claimed design property in the paper summary (asserted ability to substitute different LLMs/embedding models); no detailed cross-model robustness experiments or scalability benchmarks provided.
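Model-agnosticism of this kind is typically achieved by coding against an embedding interface rather than a concrete backend. The `Embedder` protocol and the toy `HashEmbedder` below are hypothetical stand-ins, not the paper's pipeline; any real embedding model exposing the same `embed` method (a SentenceTransformers wrapper, an API client) could be swapped in without touching the downstream logic:

```python
from typing import Protocol

import numpy as np

class Embedder(Protocol):
    """Interface the pipeline depends on; any backend satisfying it plugs in."""
    def embed(self, texts: list[str]) -> np.ndarray: ...

class HashEmbedder:
    """Toy backend: L2-normalized hashed bag-of-words vectors (illustrative only)."""
    def __init__(self, dim: int = 64):
        self.dim = dim

    def embed(self, texts: list[str]) -> np.ndarray:
        out = np.zeros((len(texts), self.dim))
        for i, t in enumerate(texts):
            for tok in t.lower().split():
                out[i, hash(tok) % self.dim] += 1.0
        norms = np.linalg.norm(out, axis=1, keepdims=True)
        return out / np.clip(norms, 1e-9, None)

def nearest_topic(abstracts: list[str], query: str, model: Embedder) -> int:
    """Index of the abstract closest to the query by cosine similarity."""
    vecs = model.embed(abstracts)
    q = model.embed([query])[0]
    return int(np.argmax(vecs @ q))
```

The swap point is the `model` argument: because `nearest_topic` only calls `embed`, replacing the backend changes the representation quality but not the method, which is the sense in which such a pipeline is model-agnostic.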
- AI should serve precision and purpose in public policy: improving foresight, enabling better trade-offs, and preserving democratic accountability. *Evidence:* Normative policy prescription and conceptual argumentation in the book; no empirical testing or quantified outcomes reported.
- AI-driven systems should empower people with knowledge and pathways to participate in global markets rather than concentrate gains. *Evidence:* Normative recommendation derived from policy analysis and value judgments in the book; not supported by empirical evidence in the blurb.
- Firms that effectively implement governed hyperautomation may realize sustainable efficiency and reliability advantages, potentially increasing market concentration in some sectors unless governance costs level the playing field. *Evidence:* Strategic and competitive-dynamics argument derived from case examples and best-practice synthesis; no sector-level empirical concentration measures presented.
- Standardized governance patterns reduce information asymmetries, enabling insurers and regulators to better price and manage enterprise AI risks. *Evidence:* Policy implication argued from the existence of standardized governance artifacts (audit trails, certifications) and industry practice; conceptual, with no empirical insurer/regulator data presented.
- Embedding governance reduces downside risks (compliance fines, data breaches), improving the expected net returns of automation investments and lowering the adoption threshold for risk-averse firms. *Evidence:* Conceptual cost-benefit argument and industry best-practice examples; no quantitative measurement of returns or threshold shifts.
- VIS can be integrated into macro/meso AI-economics models (input–output general equilibrium, growth models) to capture embodied labor and capital effects and to enable counterfactual analysis of AI diffusion scenarios. *Evidence:* The authors propose methodological extensions that embed VIS-style accounting into larger economic models for scenario analysis (conceptual suggestion).
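As a sketch of the kind of counterfactual such an integration enables, the standard Leontief input–output identity x = (I - A)^-1 d can be combined with per-sector labor coefficients to trace embodied labor under an AI-diffusion scenario. The three-sector matrix and the 25% labor-intensity cut below are invented numbers for illustration, not VIS data:

```python
import numpy as np

# Stylized 3-sector economy (hypothetical coefficients, not VIS data).
# A[i, j] = input from sector i required per unit of output of sector j.
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])
final_demand = np.array([100.0, 80.0, 60.0])
labor_per_output = np.array([0.30, 0.50, 0.40])  # labor embodied per unit of gross output

def total_labor(A: np.ndarray, d: np.ndarray, l: np.ndarray) -> float:
    """Leontief: gross output x = (I - A)^-1 d; total embodied labor = l . x."""
    x = np.linalg.solve(np.eye(len(d)) - A, d)
    return float(l @ x)

baseline = total_labor(A, final_demand, labor_per_output)

# Counterfactual: AI diffusion cuts sector 2's labor intensity by 25%.
ai_labor = labor_per_output * np.array([1.0, 0.75, 1.0])
counterfactual = total_labor(A, final_demand, ai_labor)
```

Because the Leontief inverse propagates demand through all upstream links, the labor change in one sector shows up in total embodied labor economy-wide, which is exactly the supply-chain propagation the authors want VIS accounting to capture.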
- VIS metrics can inform policy decisions (workforce retraining, sectoral subsidies, taxation) by revealing where AI-induced productivity changes will propagate through supply chains. *Evidence:* The authors argue policy relevance based on VIS's ability to map upstream and downstream labor effects; presented as an implication rather than empirically validated policy outcomes.