Evidence (1902 claims)
Claim counts by topic category:

| Topic | Claims |
|---|---|
| Adoption | 5126 |
| Productivity | 4409 |
| Governance | 4049 |
| Human-AI Collaboration | 2954 |
| Labor Markets | 2432 |
| Org Design | 2273 |
| Innovation | 2215 |
| Skills & Training | 1902 |
| Inequality | 1286 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
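The matrix can also be read as direction-of-finding shares. A minimal Python sketch, using each row's Positive and Total columns for a few illustrative outcomes (row selection is ours, not part of the dataset):

```python
# Share of positive findings per outcome, taken from the Evidence Matrix rows
rows = {
    # outcome: (positive claims, total claims)
    "Firm Productivity":  (273, 389),
    "AI Safety & Ethics": (112, 358),
    "Job Displacement":   (5, 45),
}
positive_share = {name: pos / total for name, (pos, total) in rows.items()}
for name, share in positive_share.items():
    print(f"{name}: {share:.0%} of claims report positive findings")
```

This makes the asymmetry visible at a glance: Firm Productivity skews strongly positive, while AI Safety & Ethics and Job Displacement skew negative.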
Skills & Training
Policy and governance should preserve worker agency (participatory design, transparency, clear accountability) and support training and institutional mechanisms (collective bargaining, workplace representation) to negotiate value-sharing from AI productivity gains.
Normative policy recommendation by authors derived from qualitative findings (workshops with 15 UX designers) that highlighted agency and distributional concerns.
Procedural material modeling (Perlin noise) is a promising technique for robust policy learning and can reduce the need for extensive real-world data collection.
Implication stated in the paper's discussion: authors suggest procedural variation via Perlin noise aided robust policy learning and improved sim-to-real transfer; empirical quantification of reduced real data needs is not provided in the summary.
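Procedural variation of this kind is a form of domain randomization: each training episode draws a freshly generated material texture so the policy cannot overfit to one appearance. A minimal sketch of smooth value noise (a simpler stand-in for the Perlin noise the paper names; this is illustrative, not the authors' implementation):

```python
import numpy as np

def value_noise_2d(shape, grid=8, seed=0):
    """Smooth 2-D value noise: random values on a coarse grid,
    bilinearly interpolated with smoothstep easing (as in classic Perlin)."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((grid + 1, grid + 1))
    ys = np.linspace(0, grid, shape[0])
    xs = np.linspace(0, grid, shape[1])
    y0 = np.floor(ys).astype(int).clip(0, grid - 1)
    x0 = np.floor(xs).astype(int).clip(0, grid - 1)
    ty = (ys - y0)[:, None]
    tx = (xs - x0)[None, :]
    # smoothstep easing removes grid-aligned creases
    ty = ty * ty * (3 - 2 * ty)
    tx = tx * tx * (3 - 2 * tx)
    c00 = coarse[np.ix_(y0, x0)]
    c01 = coarse[np.ix_(y0, x0 + 1)]
    c10 = coarse[np.ix_(y0 + 1, x0)]
    c11 = coarse[np.ix_(y0 + 1, x0 + 1)]
    top = c00 * (1 - tx) + c01 * tx
    bot = c10 * (1 - tx) + c11 * tx
    return top * (1 - ty) + bot * ty

# Draw a new material texture per training episode (domain randomization)
textures = [value_noise_2d((64, 64), grid=4, seed=s) for s in range(3)]
```

Varying the seed (and grid density) per episode is what exposes the policy to the material diversity that the authors credit for improved sim-to-real transfer.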
Perception of the material's location inside the vial was used to guide the agent.
Paper summary states perception input (material location) was provided to the agent; sensing modality and accuracy/details of perception are not specified.
There is a strong complementarity between AI investments and organizational change: firms with better leadership, cross-functional processes, and data practices capture disproportionate benefits, implying increasing returns to scale and potential winner-take-most dynamics.
Authors' theoretical inference from cross-case patterns and economic reasoning; supported qualitatively by cases showing disproportionate gains in better-managed firms.
Firms that can credibly supply explainability and governance may capture a premium—explainability can be a competitive differentiator and a signal of quality and lower regulatory risk.
Conceptual synthesis and market-structure arguments from the reviewed literature; reviewed studies provide theoretical and some qualitative support but not systematic market-price estimates.
Policy should incentivize transparency, auditability, standards for human–AI interfaces, workforce development, certification of teaming practices, and liability frameworks to ensure accountability and equitable outcomes.
Normative recommendation based on ethical and governance considerations synthesized in the paper; not supported by policy evaluation evidence within the paper.
Orchestrating attention and interrogation through interface and workflow design helps manage what humans and AI focus on and how they challenge/verify each other, thereby reducing errors and misuse.
Prescriptive claim grounded in human factors and HCI literature synthesized by the authors; the paper suggests these mechanisms but does not report empirical trials demonstrating effects.
Design principles (define goals/constraints, partition roles, orchestrate attention/interrogation, build knowledge infrastructures, continuous training/evaluation) are necessary design levers to build high-performing, transparent, trustworthy, and equitable Human–AI teams.
Prescriptive synthesis from reviewed literatures and conceptual modeling; these principles are proposed heuristics rather than empirically validated interventions in the paper.
Investments to build trust in AI (transparency, reliability, training) are likely to have positive returns via higher adoption rates and realized AI benefits.
This is presented as an implication derived from observed positive associations between trust and outcomes; the study did not conduct cost–benefit or longitudinal causal tests of such investments in the reported analyses.
Practical levers to increase AI trust include transparency of AI models, demonstrated reliability, and manager-focused AI literacy/training.
Paper proposes these levers based on study findings and discussion (recommendations), but they were not tested experimentally in the reported cross-sectional survey.
A stronger data-driven decision culture that stems from AI trust yields better operational and academic outcomes.
Study reports positive associations between AI trust → data-driven culture → operational and academic outcomes in survey-based analyses; however, the summary does not specify which operational/academic metrics were measured or sample size.
Vendors that embed robust cognitive interlocks into development platforms can command premium pricing by reducing downstream risk; verification features may become a competitive moat.
Market-structure and product-differentiation reasoning in the paper; no market data, pricing studies, or competitive analyses presented.
Human verification (and automated verification infrastructure) becomes the limiting factor and a scarce complement to AI generation, raising demand and wages for verification expertise and tooling.
Theoretical labor-market analysis and complementarity argument in the paper; no labor market data or econometric estimates provided.
AI contributes to flatter, more networked and modular organizational forms, with increased cross-functional coordination enabled by shared data platforms and real-time analytics.
Conceptual reasoning supported by cross-sector illustrative examples; no standardized cross-firm comparative empirical study reported in the book.
Model and platform providers may capture significant rents through APIs and integrated developer tooling.
Market-structure analysis and observations of current platform monetization strategies; speculative projection based on platform economics.
Skill premiums may shift toward workers who can effectively collaborate with AI (prompting, verification, security auditing).
Theoretical and early observational studies suggesting complementary skills add value; limited empirical wage/earnings evidence to date.
Computer science curricula should emphasize computational thinking, debugging skills, and verification practices rather than rote coding alone.
Educational implications drawn from studies of learning with LLMs, risks of shallow learning, and expert recommendations; primarily normative and prescriptive rather than experimental proof.
Producing occupation × skill × region OAIES scores with uncertainty intervals and scenario modes (conservative/optimistic adoption) will improve decision‑relevant information for policymakers.
Design specification and intended outputs described in the paper; no user testing or policymaker impact evaluation reported.
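The paper does not publish the OAIES schema, but the stated outputs (occupation × skill × region scores, uncertainty intervals, scenario modes) suggest a record shape along these lines. Field names below are our illustration, not the paper's specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OAIESCell:
    """One hypothetical occupation x skill x region score cell.
    Field names are illustrative; the paper's actual schema is not given."""
    occupation: str
    skill: str
    region: str
    scenario: str          # "conservative" or "optimistic" adoption mode
    score: float           # point estimate of exposure
    ci_low: float          # lower bound of uncertainty interval
    ci_high: float         # upper bound of uncertainty interval

cell = OAIESCell("data analyst", "statistical modelling", "EU27",
                 scenario="conservative",
                 score=0.62, ci_low=0.48, ci_high=0.75)
```

Publishing intervals and scenario modes alongside point scores is what would let policymakers distinguish robust exposure signals from model-dependent ones.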
When tasks are well matched to GenAI capabilities, firms can raise output per consultant and reduce time-per-task, thereby changing the marginal productivity of labor in consulting.
Inferred in the implications section from interview-based observations and the TGAIF framework; no reported quantitative measurement of output per consultant or time savings in the study.
Demand will increase for complementary goods: orchestration platforms, testing/verification tools, secure code-generation services, and team-level integrations.
Projected market implication based on practitioner-identified frictions (quality, security, integration) in the Netlight study; speculative market prediction without market data.
The need to orchestrate AI ensembles increases demand for skills in system design, AI-tooling, and coordination rather than only coding.
Authors' inference based on observed practitioner emphasis on supervision and integration tasks in the Netlight qualitative study; no labor market data provided.
Platforms combining high-volume generation with effective filtering/curation can create strong network effects and concentration in markets for AI-assisted ideation.
Market-structure reasoning and illustrative platform examples from the literature; no empirical market-wide causal studies reported in the review.
Firms that embed AI into collaborative workflows and invest in human curation may capture disproportionate returns (first-mover and scale advantages).
Theoretical/strategic argument supported by some applied case evidence and platform-market reasoning cited in the synthesis; the review notes absence of systematic causal firm-level evidence.
Generative AI will create complementarity: increasing returns to skills in evaluation, curation, synthesis, and domain expertise that integrate AI outputs.
Theoretical labor-economics reasoning supported by case studies and task-level studies showing demand for evaluation/curation skills in AI-assisted workflows; direct causal evidence on wage effects is limited in the reviewed literature.
Lowered cost and time of ideation and early-stage R&D due to generative AI may accelerate innovation cycles and reduce firms' search costs.
Inference from studies reporting reduced time-to-prototype and increased ideation; this is an economic interpretation rather than directly measured long-run firm-level innovation rates in the reviewed studies.
Firms that successfully integrate trustworthy, accurate AI can achieve faster strategic pivots and potentially gain competitive advantages and higher returns to organizational capital that embeds AI capabilities.
Associations between perceived trust/accuracy and organizational agility indicators in the quantitative analysis, plus qualitative case-like interview evidence suggesting competitive benefits; explicit causal estimates of returns not provided (implication is inferential).
Improved matching from predictive tools can shorten vacancy durations and improve reallocation dynamics in labor markets.
Implication from the review citing reported improvements in candidate screening and matching in some included studies; identified as a mechanism for labor-market effects.
The framework supports innovation via logical modelling and data analysis.
Listed as an advantage: logical modelling and data analysis enable innovation in instructional design. Support is conceptual; no empirical evidence presented.
Implementing the proposed framework will reduce 'brain waste' by improving recognition and cross-border mobility of DRC-trained technical personnel.
Theoretical claim supported by operations-research logic and labor-market allocation arguments in the paper; no empirical causal evaluation, sample, or longitudinal labor-market outcome data provided.
AI should serve precision and purpose in public policy — improving foresight, enabling better trade-offs, and preserving democratic accountability.
Normative policy prescription and conceptual argumentation in the book; no empirical testing or quantified outcomes reported.
AI-driven systems should empower people with knowledge and pathways to participate in global markets rather than concentrate gains.
Normative recommendation derived from policy analysis and value judgments in the book; not supported by empirical evidence in the blurb.
Incentives for human‑augmenting AI (e.g., subsidies or tax incentives tied to task redesign and training) can promote inclusive adoption patterns.
Policy analysis and comparative case studies; theoretical models that predict firm adoption responses to incentives, but limited causal empirical evidence specific to AI-targeted incentives.
Authors propose the 'AI orchestra' concept: future development will involve coordinated ensembles of specialized AI agents (code generation, test generation, dependency analysis, security scanning) orchestrated by humans and higher-level controllers.
Theoretical/conceptual argument by the authors grounded in qualitative findings from Netlight (practitioner reports of multiple tools and coordination frictions); this is a forward-looking synthesis rather than an empirically established fact.
Platform design choices (property rights, portability, reputation, tokenization, escrowed memories) will shape incentives for contributions to shared knowledge and agent improvement.
Policy and mechanism-design implications drawn from observed phenomena (shared memories, contributions, and trust) in the qualitative dataset; recommendation rather than empirically tested claim.
Shared memory architectures create public-good–like externalities (knowledge diffusion and spillovers) that may be underprovided absent coordination or platform governance.
Qualitative observations of shared memories and diffusion patterns plus theoretical economic interpretation; no empirical quantification of spillover magnitudes provided.
Research and measurement priorities include monitoring substitution versus complementarity effects of AI on wages and hours across occupations, improving data on informal work and real-time skill demand, and evaluating effectiveness of training modalities in the Albanian context.
Stated research agenda in the paper motivated by observed limitations and gaps (correlational evidence, measurement gaps, policy uncertainty); these are recommendations rather than empirical findings.
Organizational heterogeneity in strategic backing and mentoring explains variation in benefits from AI adoption across firms and sectors, contributing to cross-firm productivity dispersion.
Theoretical claim linking organizational moderators to heterogeneous adoption outcomes; proposed as an empirical research direction without data provided.
Managerial and peer mentoring styles (e.g., directive vs. developmental mentoring) influence how affordances are perceived and actualized, affecting learning, trust, and task allocation in human–AI collaboration.
Theoretical argument drawing on mentoring and organizational behavior literatures integrated with AST/AAT; no empirical tests or sample presented.
Continuous learning capabilities imply ongoing maintenance/data costs but can lower long-run performance degradation and retraining expenses.
Analytic implication derived from system design (continuous model updating) and standard ML maintenance considerations; not empirically quantified in the paper.
Policy adaptation, workforce reskilling, and AI governance frameworks will determine whether GenAI's long-term impact is inclusive or inequality-enhancing.
Normative conclusion in the paper based on reviewed empirical findings and policy literature (predictive/speculative; no empirical test provided in excerpt).
Women in Ireland use advanced digital skills at rates broadly comparable to women elsewhere in Europe; Ireland's large gender gap instead reflects particularly high rates of advanced digital task use among men.
Cross-country comparison of female rates of advanced digital task use in ESJS descriptive tables; comparison highlights that Irish female rates are similar to European female averages while Irish male rates are unusually high.
Differences in observable worker and job characteristics (education, field of study, occupation, sector) explain only a minority of the Europe-wide gender gap in advanced digital task use, accounting for around 30% on average.
Decomposition analysis (e.g., Oaxaca–Blinder style) applied to ESJS data to partition the gender gap into explained (observable characteristics) and unexplained components. (Exact sample sizes by subgroup not reported in excerpt.)
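The decomposition logic can be shown on simulated data (not the ESJS microdata): fit group-specific regressions, then split the mean gap into an explained part, driven by differences in observables, and an unexplained residual. A minimal twofold Oaxaca–Blinder sketch with men's coefficients as the reference:

```python
import numpy as np

rng = np.random.default_rng(42)

def ols(X, y):
    """OLS coefficients with an intercept column prepended."""
    Xc = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta

# Simulated workers: one observable (e.g. years of ICT-related education)
n = 5000
x_m = rng.normal(1.2, 1.0, n)   # men: slightly higher mean endowment
x_f = rng.normal(1.0, 1.0, n)
# Outcome: propensity to perform advanced digital tasks (linear probability style)
y_m = 0.5 + 0.20 * x_m + rng.normal(0, 0.5, n)
y_f = 0.3 + 0.20 * x_f + rng.normal(0, 0.5, n)

b_m = ols(x_m[:, None], y_m)
b_f = ols(x_f[:, None], y_f)

gap = y_m.mean() - y_f.mean()
# Explained: endowment differences priced at the reference coefficients
explained = (x_m.mean() - x_f.mean()) * b_m[1]
unexplained = gap - explained
print(f"gap={gap:.3f} explained={explained/gap:.0%} unexplained={unexplained/gap:.0%}")
```

In this simulation, as in the ESJS finding, most of the gap lands in the unexplained component, because the group intercepts differ even after accounting for the observable.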
Lower barriers to producing design concepts with GenAI could enable more freelancing and entry by non-traditional providers, altering market structure and intensifying competition at the lower end of the value chain.
Speculative implication extrapolated from interview findings and economic reasoning in the paper; not empirically tested within the study.
Demand for designers will likely shift toward individuals combining domain expertise with algorithmic/AI fluency (prompting strategies, tool orchestration), potentially increasing returns to these hybrid skills.
Inference and implication drawn from interview themes about algorithmic thinking and authors' policy/economics discussion; not empirically tested in study.
Reported pilot gains, if scaled, could shift firm‑level returns and industry productivity measures, but gains are contingent on coordinated adoption; uneven uptake may produce winner‑takes‑more dynamics among technologically advanced firms.
Inference from pilot results and economic reasoning in the reviewed literature; no large‑scale empirical validation provided in the review.
Adoption heterogeneity may widen productivity dispersion across firms and contribute to market concentration, since organizations with better data, processes, and training budgets will capture more benefit.
Economic interpretation of literature and survey findings; speculative projection rather than empirical measurement within the study.
Governance, regulatory capacity, and labor market institutions will determine whether AI embodied in foreign investment translates into technology transfer, local capability building, and decent jobs.
Policy implication based on the review's repeated finding that institutional quality and labor regulation mediate FDI spillovers; specific empirical work on AI mediation is recommended but not yet available.
Foreign investors are potential major vectors of AI and digital technology transfer; the sectoral pattern of FDI will influence whether AI adoption leads to inclusive productivity gains or concentrated skill‑biased displacement.
Forward‑looking implication drawn from synthesis of FDI-to-technology transfer literature; no new empirical evidence on AI specifically in SSA provided in the review (authors call for empirical studies).
Demand for mid-level, routine-focused developer roles could compress while demand rises for verification, security, and AI–human orchestration skills.
Theoretical task-replacement argument based on observed capabilities of LLMs and synthesized user study evidence; limited direct labor-market empirical evidence in the reviewed literature.
Routine coding tasks may be partially automated, shifting human labor toward verification, integration, architecture, and domain-specific tasks.
Task-composition studies, user studies showing LLMs handle boilerplate/routine work, and economic inference synthesized across studies.