Evidence (5539 claims)

Claims by topic filter:

| Topic | Claims |
|---|---|
| Adoption | 5539 |
| Productivity | 4793 |
| Governance | 4333 |
| Human-AI Collaboration | 3326 |
| Labor Markets | 2657 |
| Innovation | 2510 |
| Org Design | 2469 |
| Skills & Training | 2017 |
| Inequality | 1378 |
Evidence Matrix
Claim counts by outcome category and direction of finding ("—" indicates no claims in that cell).
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Adoption
Key disadvantages and barriers to the proposed digital modernization are administrative backlogs, rural infrastructure deficits, and qualification fragmentation.
Identified limitations in the paper's diagnostic section; based on conceptual review and sector knowledge rather than quantified barrier assessment.
Rural constraints (limited electricity, limited ICT access, and fewer training centers) reduce inclusion of rural trainees in vocational-to-engineering pathways.
Qualitative discussion and domain knowledge within the paper; no field survey or representative sample quantifying the rural access gap.
Fragmentation and overlap across vocational and technical qualifications create discontinuities that impede career progression.
Conceptual analysis of qualification frameworks and mapping of vocational/technical curricula; no empirical measurement of career outcomes or frequencies of pathway breakdowns.
Administrative irregularities and backlogs exist in SAQA/NATED ratification processes, including suspension or deregistration actions carried out without due process.
Institutional review and diagnostic claims in the paper; assertions drawn from document/process analysis rather than audited data or quantified case series (no sample size provided).
Misalignment between hands-on technical training (artisan-level skills) and formal institutional certification (SAQA/NATED/NCV/SETA) is blocking vocational-to-engineering career progression.
Qualitative institutional review and conceptual systems analysis presented in the paper; no empirical dataset, no sample size, argumentation based on policy/process review and domain knowledge.
The emission‑reduction benefits of IR are larger in provinces with explicit policy support for automation or green development.
Heterogeneity/interaction analysis using indicators of policy support (presence/intensity of targeted policies); results show a stronger negative IR–IWE effect where policy support is stronger.
The emission‑reduction benefits of IR are larger in provinces with deeper financial markets (greater financial depth).
Heterogeneity analysis splitting the sample or interacting IR with a financial depth measure (provincial financial development indicators); stronger negative IR–IWE coefficients found in provinces with higher financial depth.
Trade policy (trade openness) should be modeled as a moderating factor when estimating technology-driven urban outcomes because openness can dampen local price effects of digital trade.
Inference based on the reported negative moderation effect of trade openness on the digital-trade → house-price relationship from interaction regressions.
Greater trade openness attenuates the positive effect of digital trade on city-level house prices.
Interaction terms between digital trade and a measure of trade openness in the panel regressions; reported negative moderation effect (exact openness measure and sample details not provided).
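The moderation design these entries describe — an interaction term between digital trade and trade openness in a price regression — can be sketched on synthetic data. All variable names, effect sizes, and the data itself are illustrative assumptions, not the paper's sample or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
dt = rng.normal(size=n)              # digital-trade intensity (simulated)
openness = rng.uniform(0.0, 1.0, n)  # trade-openness measure (simulated)
# Prices generated with a positive digital-trade effect that openness attenuates.
price = 1.0 + 0.8 * dt - 0.5 * dt * openness + rng.normal(scale=0.3, size=n)

# OLS with constant, main effects, and the interaction term.
X = np.column_stack([np.ones(n), dt, openness, dt * openness])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
b_dt, b_inter = beta[1], beta[3]
print(f"digital-trade effect: {b_dt:.2f}, interaction: {b_inter:.2f}")
```

A positive main coefficient with a negative interaction coefficient is the signature of the reported attenuation: the marginal price effect of digital trade, `b_dt + b_inter * openness`, shrinks as openness rises.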
Carbon emission efficiency (CEE) partially mediates the relationship between DE and per capita carbon emissions (DE → CEE → PCE).
Mediating-effect (mediation) models applied to the 278-city panel (2011–2022) testing the indirect pathway from DE to PCE through CEE; mediation tests (coefficients and significance levels) indicate a mediating role for CEE.
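The indirect-pathway test named here (DE → CEE → PCE) is typically a product-of-coefficients mediation decomposition. A minimal sketch on simulated data follows; the coefficients and data are invented for illustration and are not the paper's 278-city estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
de = rng.normal(size=n)                          # digital economy index (simulated)
cee = 0.6 * de + rng.normal(scale=0.5, size=n)   # mediator: carbon emission efficiency
pce = -0.4 * cee - 0.2 * de + rng.normal(scale=0.5, size=n)  # per capita emissions

def ols(y, *xs):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(cee, de)[1]            # path DE -> CEE
b = ols(pce, de, cee)[2]       # path CEE -> PCE, controlling for DE
direct = ols(pce, de, cee)[1]  # direct DE -> PCE effect
c = ols(pce, de)[1]            # total DE -> PCE effect
indirect = a * b               # mediated portion
print(f"total {c:.3f} = direct {direct:.3f} + indirect {indirect:.3f}")
```

In linear models the decomposition is exact: the total effect equals the direct effect plus the product of the two mediation-path coefficients, which is what significance tests of `a * b` (e.g. Sobel or bootstrap) evaluate.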
Policy and regulatory vacuum (data governance, interoperability, safeguards) limits scale and inclusive diffusion of AI in agriculture.
Authors' thematic finding from reviewed literature and institutional reports noting weak policy frameworks and governance gaps.
Limited digital literacy and human capacity among smallholders is a key barrier to adoption and effective use of AI solutions.
Multiple studies and reports in the review documenting low digital literacy, limited extension capacity, and training needs among target users.
Scalable adoption of AI in developing-country agriculture is constrained by infrastructure gaps (connectivity, power, data platforms).
Thematic synthesis across reviewed studies and reports identifying recurring infrastructure constraints limiting deployment and scale-up.
Data governance, privacy, and cybersecurity risks can create negative externalities and raise adoption costs, requiring governance frameworks that affect social welfare outcomes.
Recurring risk themes across reviewed papers (conceptual analyses, case reports) that highlight governance and cybersecurity concerns associated with DT data.
Principal barriers to DT adoption include paper‑based or legacy regulatory/compliance processes that slow digitisation.
Findings from reviewed studies noting regulatory and compliance processes as impediments to digital handover and automated workflows.
Principal barriers to DT adoption include misaligned stakeholder incentives and fragmented project delivery models.
Synthesis of conceptual and case literature describing contractual and incentive misalignments that impede lifecycle data continuity.
Principal barriers to DT adoption include low digital maturity and uneven capabilities across supply chains.
Recurring observations in the literature review about heterogeneous digital skills and maturity across firms in the supply chain.
Principal barriers to DT adoption include data quality and continuity problems at handover.
Thematic synthesis across reviewed literature reporting frequent issues with data quality and handover continuity between project phases.
Principal barriers to DT adoption include interoperability gaps and lack of standards.
Thematic findings from qualitative synthesis of the 160 reviewed studies (recurring theme across conceptual papers, case studies and pilots).
ANN analysis ranks need-for-human-interaction barriers as the most important predictor of GAICS adoption outcome.
ANN feature-importance analysis reported in the paper ranks predictors of the adoption outcome and identifies the human-interaction barrier as the most important; the paper's abstract gives no details of the ANN implementation or sample characteristics.
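The paper does not specify how importances were computed; one common approach for a fitted predictor is permutation importance (shuffle each feature and measure the accuracy drop). The sketch below uses synthetic data and a linear stand-in for the ANN; everything here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 3
X = rng.normal(size=(n, p))
# Adoption outcome driven mostly by feature 0 (a stand-in for the
# human-interaction barrier); feature 2 is pure noise.
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Stand-in predictor: thresholded least-squares fit (the paper used an ANN;
# permutation importance works the same way for any fitted model).
w = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)[0]
predict = lambda M: (np.column_stack([np.ones(len(M)), M]) @ w > 0.5).astype(float)

base_acc = (predict(X) == y).mean()
importance = []
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to the outcome
    importance.append(base_acc - (predict(Xp) == y).mean())
print("accuracy drop per permuted feature:", np.round(importance, 3))
```

The feature whose permutation costs the most accuracy is ranked most important, mirroring how the paper singles out the human-interaction barrier as the top predictor.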
Platformization and data moats in digital lending can increase concentration risks: firms with richer data histories gain sustained access to cheaper finance, potentially raising market concentration.
Market structure analysis and conceptual synthesis of two‑sided platform economics applied to fintech; argued via theoretical mechanisms and qualitative observations rather than new empirical measurement of concentration effects.
Contemporary financing alternatives introduce new risks including data/privacy vulnerabilities, regulatory compliance gaps, and lender heterogeneity.
Synthesis of regulatory and institutional context and qualitative assessment of financing models; supported by discussion of practical risks observed in case studies and literature on digital finance governance.
Lowered costs and faster design cycles increase biosecurity and dual‑use concerns; economic policy should therefore consider regulation, liability, and monitoring.
Paper raises these concerns in 'Externalities, regulation, and biosecurity'; it is a policy recommendation based on reduced barriers to design rather than empirical incidents presented in the text.
High compute requirements favor incumbents with capital and cloud access, increasing barriers to entry and potential for market concentration in biotech AI.
Paper argues this in 'Capital, compute, and concentration', linking compute intensity to entry barriers; no quantitative thresholds or firm‑level data are presented.
Economic value and competitive advantage will concentrate around entities that control large sequence/structure datasets, compute resources, and refined models (platform effects).
Paper states this as a likely market outcome in 'Market structure and value capture' and 'Capital, compute, and concentration' sections; no quantitative market analysis is provided.
Harms from manipulation, harassment, and de‑anonymizing biometric data create negative social externalities (mental health impacts, discrimination); without regulation, platforms may under‑invest in protective measures.
Synthesis of harms and economic externality reasoning from the reviewed studies; claim is theoretical and policy‑oriented rather than empirically quantified in the paper.
Ongoing operational costs for safe multi‑user VR services (model updates, policy tuning, user support, human moderators) raise marginal costs relative to less‑protected services.
Qualitative cost components identified in the literature and by the authors; no empirical cost accounting or per‑unit estimates provided.
Implementing TVR‑Sec requires upfront investments in secure hardware, AI monitoring engines, and moderation infrastructure, increasing entry costs for new VR platforms and favoring incumbents or well‑capitalized entrants.
Authors' economic analysis based on component cost categories identified across the reviewed literature; no quantitative cost estimates provided.
Weak or opaque civil–military interfaces can create hidden demand for capabilities, skew R&D incentives toward secrecy, and reduce competition and efficiency in civilian markets.
Secondary literature on civil–military relations combined with policy analysis; inferential rather than empirically verified within the study.
Progressive use of export controls and differing normative stances on dual‑use technology can disrupt supply chains, affect comparative advantage, and increase costs for multinational suppliers and downstream users.
Analysis of export‑control policies across jurisdictions and theoretical implications discussed in the economics implications section (no quantitative supply‑chain measurement presented).
Pakistan’s weaker governance of military AI may lower immediate compliance burdens for firms but raise reputational and export risks.
Synthesis of Pakistan’s governance documents and civil–military literature, with inferential policy commentary on market and reputational consequences.
Divergent regulatory regimes increase compliance uncertainty for firms and may fragment markets for dual‑use and defence‑adjacent AI goods/services.
Policy commentary drawing on comparative regulatory findings; inference about market effects rather than empirical measurement.
High frictions or opaque consent reduce data supply, raising costs of training models and potentially reducing market competition by advantaging incumbents with richer legacy data.
Economic reasoning and scenario analysis from the workshop; proposed as an implication rather than an empirically tested claim in the workshop summary.
Inadequate consent creates information asymmetries and negative externalities (privacy harms, loss of trust) that can distort demand for AI services.
Theoretical/economic argument presented in the workshop materials and position papers; not supported by a specific empirical study within the workshop summary.
Dynamic behavior of models (continual learning, model updates) changes the meaning of past consent.
Conceptual argument discussed at the workshop and in position papers; no empirical longitudinal analysis presented within the workshop summary.
Decision delegation to AI agents and opaque personalization blur the scope of consent and control.
Theoretical and design-oriented synthesis from interdisciplinary workshop discussions and position papers; no empirical measurement reported.
Existing controls are not user-friendly or empowering.
Qualitative assessment produced during co-design and participatory prototyping at the workshop and position papers; no quantitative usability metrics presented in the summary.
Privacy policies remain hard to understand; transparency alone doesn’t ensure protection.
Workshop synthesis and position papers citing longstanding observations in HCI and privacy research; the workshop did not report a new empirical study measuring comprehension.
Cookie banners and clickwrap routinely violate informed-consent principles.
Claim arises from workshop findings and referenced critiques in position papers and HCI/privacy literature discussed during the workshop; no new empirical counts or sample sizes reported in the workshop summary.
Current privacy-consent mechanisms (cookie banners, dense policies, transparency-only approaches) fail to deliver meaningful user control.
Synthesis from the workshop participants and position papers; based on qualitative critique of existing mechanisms using the Futures Design Toolkit and participatory design discussions. No primary empirical sample or quantitative evaluation reported in the workshop summary.
Biased or unrepresentative AI outputs produce negative externalities, including maladaptation and inefficient investments in vulnerable regions.
Conceptual analysis and illustrative cases linking misleading model outputs to maladaptive decisions; the paper notes risks rather than providing quantified incidence or cost estimates.
Returns to scale in compute and data favor incumbents; without intervention this dynamic can entrench inequality in the global climate-information market.
Economic theory of returns to scale combined with observed compute concentration; no empirical elasticity or returns-to-scale estimates provided.
Concentration of compute and model development creates market power for Northern institutions and companies, likely leading to unequal pricing, control over standards, and capture of high-value climate services.
Descriptive mapping of concentration plus economic analysis of market structure and returns to scale; illustrative rather than quantitatively proven across markets.
Rapid AI adoption without a shift from model-centric to data- and equity-centric development risks producing systematically worse performance and misleading recommendations for the most climate-vulnerable, data-sparse regions.
Synthesis of domain-specific case studies (weather/climate, impact models, LLMs) and conceptual causal tracing demonstrating how infrastructure asymmetry can degrade outputs in vulnerable regions; evidence illustrative rather than causal-estimate based.
Large language models (LLMs) that rely on dominant, textualized climate knowledge tend to foreground Northern epistemologies and marginalize local or indigenous knowledge, reinforcing biases in climate narratives and recommendations.
Case studies and analysis of training-corpus composition and output examples illustrating the dominance of Northern textual sources and examples of sidelining local knowledge; no large-scale audit results provided.
In climate impact modelling, sparse and unrepresentative exposure and vulnerability data combined with inadequate validation generate high uncertainty and risk of misleading interventions and maladaptation in vulnerable locales.
Targeted case studies and literature synthesis showing gaps in exposure/vulnerability datasets and validation failures; argument is illustrated rather than quantified across all systems.
In weather and climate modelling, historically and spatially biased observational data produce systematic performance gaps in under-observed tropical and low-income regions, reducing forecast fidelity where adaptive capacity is lowest.
Comparative, domain-specific case studies and literature review documenting observational data sparsity and illustrative empirical performance gaps; no single cross-system statistical estimate provided.
The geographic concentration of compute and model development creates path dependence: model design, training datasets, and validation reflect Northern priorities and contexts.
Conceptual analysis supported by cross-disciplinary synthesis and illustrative case studies showing dataset selection, validation practices, and model design choices aligned with Northern contexts rather than global representativeness.
At the organizational scale, AI adoption is constrained and shaped by compliance requirements, formal policies, and prevailing norms.
Participants' accounts in workshops (n=15) noting compliance and policy considerations; thematic analysis classified these as organizational-level constraints.
Creators who systematize high-throughput AI workflows or control distribution channels may capture outsized returns, potentially increasing winner-take-most dynamics on platforms.
Theoretical implication extrapolated from observed high-throughput practices and monetization strategies in the 377 videos; not directly measured or quantified in the dataset.