Evidence (4333 claims)
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
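To compare rows at a glance, a derived statistic such as the share of positive findings is useful. Below is a minimal Python sketch; the values are copied from a few rows of the table above, the row selection is illustrative, and "—" cells are treated as zero:

```python
# Rank outcome categories by share of positive findings.
# Tuples hold (positive, negative, mixed, null, total) counts
# copied from the evidence matrix; "—" cells are entered as 0.
MATRIX = {
    "Firm Productivity":    (306, 39, 70, 12, 432),
    "Task Completion Time": (88, 5, 4, 3, 100),
    "AI Safety & Ethics":   (116, 177, 44, 24, 363),
    "Inequality Measures":  (25, 77, 32, 5, 139),
    "Job Displacement":     (6, 38, 13, 0, 57),
}

def positive_share(row: tuple) -> float:
    """Fraction of an outcome's claims that report a positive finding."""
    positive, *_middle, total = row
    return positive / total

for outcome, row in sorted(MATRIX.items(), key=lambda kv: -positive_share(kv[1])):
    print(f"{outcome:<22} {positive_share(row):6.1%} positive of {row[-1]} claims")
```

On these rows the spread is wide: task-completion-time studies skew strongly positive (88%), while inequality and job-displacement findings skew negative, matching the pattern in the full table.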
Governance
- Scalable adoption of AI in developing-country agriculture is constrained by infrastructure gaps (connectivity, power, data platforms). *Evidence:* Thematic synthesis across reviewed studies and reports identifying recurring infrastructure constraints limiting deployment and scale-up.
- Data governance, privacy, and cybersecurity risks can create negative externalities and raise adoption costs, requiring governance frameworks whose design affects social welfare outcomes. *Evidence:* Recurring risk themes across reviewed papers (conceptual analyses, case reports) that highlight governance and cybersecurity concerns associated with digital-twin (DT) data.
- Principal barriers to DT adoption include paper‑based or legacy regulatory/compliance processes that slow digitisation. *Evidence:* Findings from reviewed studies noting regulatory and compliance processes as impediments to digital handover and automated workflows.
- Principal barriers to DT adoption include misaligned stakeholder incentives and fragmented project delivery models. *Evidence:* Synthesis of conceptual and case literature describing contractual and incentive misalignments that impede lifecycle data continuity.
- Principal barriers to DT adoption include low digital maturity and uneven capabilities across supply chains. *Evidence:* Recurring observations in the literature review about heterogeneous digital skills and maturity across firms in the supply chain.
- Principal barriers to DT adoption include data quality and continuity problems at handover. *Evidence:* Thematic synthesis across reviewed literature reporting frequent issues with data quality and handover continuity between project phases.
- Principal barriers to DT adoption include interoperability gaps and lack of standards. *Evidence:* Thematic findings from qualitative synthesis of the 160 reviewed studies (recurring theme across conceptual papers, case studies and pilots).
- Platformization and data moats in digital lending can increase concentration risks: firms with richer data histories gain sustained access to cheaper finance, potentially raising market concentration. *Evidence:* Market structure analysis and conceptual synthesis of two‑sided platform economics applied to fintech; argued via theoretical mechanisms and qualitative observations rather than new empirical measurement of concentration effects.
- Contemporary financing alternatives introduce new risks including data/privacy vulnerabilities, regulatory compliance gaps, and lender heterogeneity. *Evidence:* Synthesis of regulatory and institutional context and qualitative assessment of financing models; supported by discussion of practical risks observed in case studies and literature on digital finance governance.
- Lower costs and faster design cycles heighten biosecurity and dual‑use concerns; economic policy should therefore consider regulation, liability, and monitoring. *Evidence:* The paper raises these concerns in 'Externalities, regulation, and biosecurity'; the claim is a policy recommendation based on reduced barriers to design rather than on empirical incidents presented in the text.
- High compute requirements favor incumbents with capital and cloud access, increasing barriers to entry and potential for market concentration in biotech AI. *Evidence:* The paper argues this in 'Capital, compute, and concentration', linking compute intensity to entry barriers; no quantitative thresholds or firm‑level data are presented.
- Economic value and competitive advantage will concentrate around entities that control large sequence/structure datasets, compute resources, and refined models (platform effects). *Evidence:* The paper states this as a likely market outcome in the 'Market structure and value capture' and 'Capital, compute, and concentration' sections; no quantitative market analysis is provided.
- Unequal access to high-quality AI tools creates demand-side market failures and vendor concentration risks, justifying public intervention (subsidies, procurement tied to privacy/audit requirements). *Evidence:* Economic reasoning supported by literature on market failures and vendor dynamics; policy recommendations drawn from comparative analysis. No empirical market-share data provided.
- Traditional signals (test scores, credentials) may lose reliability as AI assistance becomes widespread, which will alter estimates of skill endowments and returns to education. *Evidence:* Conceptual economic analysis and literature synthesis arguing how AI augmentation can change signaling and measurement; no empirical quantification presented in the paper.
- Teachers currently lack sufficient preparedness (training, time, tools) to integrate AI into formative assessment and to interpret AI-informed evidence; addressing this is necessary for a successful transition. *Evidence:* Review of education policy documents, literature on teacher professional development, and comparative case descriptions highlighting teacher-focused policies; no primary survey data reported.
- Unequal access to AI amplifies existing achievement gaps and biases assessment outcomes, making equity a primary concern for AI-compatible assessment. *Evidence:* Conceptual and economic analysis drawing on literature about digital divides and policy documents; illustrated through comparative country cases showing variation in access and resources.
- AI changes the production of student work (e.g., generative content, altered authorship), undermining traditional notions of student-authored artifacts used in assessment. *Evidence:* Conceptual analysis plus secondary literature on generative AI usage in education and observed capabilities of tools; case studies reference policy responses but no primary measurement of prevalence.
- Standardized summative tests were designed for an environment without routine, external AI assistance; those design assumptions are breaking down. *Evidence:* Literature review and synthesis of assessment frameworks contrasted with descriptions of contemporary AI capabilities; conceptual argument rather than empirical test.
- Conventional standardized, summative assessment is becoming increasingly misaligned with classroom reality because widespread student access to AI tools changes what, how, and where learning occurs. *Evidence:* Conceptual and policy analysis drawing on established assessment theory and literature on educational technology and AI; supported by comparative case studies of four countries using publicly available policy texts and secondary literature. No primary empirical/causal data or sample size reported.
- Harms from manipulation, harassment, and de‑anonymizing biometric data create negative social externalities (mental health impacts, discrimination); without regulation, platforms may under‑invest in protective measures. *Evidence:* Synthesis of harms and economic externality reasoning from the reviewed studies; claim is theoretical and policy‑oriented rather than empirically quantified in the paper.
- Ongoing operational costs for safe multi‑user VR services (model updates, policy tuning, user support, human moderators) raise marginal costs relative to less‑protected services. *Evidence:* Qualitative cost components identified in the literature and by the authors; no empirical cost accounting or per‑unit estimates provided.
- Implementing TVR‑Sec requires upfront investments in secure hardware, AI monitoring engines, and moderation infrastructure, increasing entry costs for new VR platforms and favoring incumbents or well‑capitalized entrants. *Evidence:* Authors' economic analysis based on component cost categories identified across the reviewed literature; no quantitative cost estimates provided.
- Unclear or overlapping rules can shift firm strategies toward risk-averse designs, limiting experimentation with novel AI features and product-market fit iterations. *Evidence:* Scenario analysis and qualitative reasoning about firm strategic responses to regulatory uncertainty; no firm-level behavioral data presented.
- Higher compliance costs and enforcement uncertainty may favor large incumbents able to absorb costs, reducing entry by startups and lowering competitive pressure. *Evidence:* Qualitative assessment and comparative reasoning about firm size and cost absorption capacity; no quantitative market entry data included.
- Regulatory ambiguity raises expected compliance risk and can depress private investment in AI capabilities deployed via platforms. *Evidence:* Scenario/impact reasoning based on economic theory of risk and investment; qualitative policy analysis without empirical investment measures.
- Divergent EU approaches influence global regulatory standards and could create cross-border frictions for multinational platforms. *Evidence:* Qualitative policy analysis and scenario reasoning on international spillovers; no empirical cross-border trade or compliance data provided.
- Monitoring AI-specific harms (e.g., algorithmic amplification, recommendation systems) requires specialized capabilities that existing enforcement bodies may lack. *Evidence:* Governance and enforcement capability analysis; qualitative assessment of institutional capacity gaps.
- Ambiguity increases compliance costs for platforms and AI developers; smaller firms may be disproportionately affected, altering market structure. *Evidence:* Qualitative assessment and scenario impact reasoning (no empirical cost estimates provided).
- Without explicit alignment mechanisms, gaps may persist (or new ones appear) between platform rules, sectoral AI requirements, and data governance regimes. *Evidence:* Comparative mapping of existing frameworks and scenario analysis highlighting alignment gaps; qualitative assessment.
- Effective implementation will require clear division of responsibilities among EU bodies and national authorities; weak coordination risks inconsistent enforcement and regulatory arbitrage. *Evidence:* Governance analysis and qualitative assessment based on institutional structure of EU and member-state authorities; scenario reasoning (no primary quantitative data).
- Weak or opaque civil–military interfaces can create hidden demand for capabilities, skew R&D incentives toward secrecy, and reduce competition and efficiency in civilian markets. *Evidence:* Secondary literature on civil–military relations combined with policy analysis; inferential rather than empirically verified within the study.
- Expanding use of export controls and differing normative stances on dual‑use technology can disrupt supply chains, affect comparative advantage, and increase costs for multinational suppliers and downstream users. *Evidence:* Analysis of export‑control policies across jurisdictions and theoretical implications discussed in the economics implications section (no quantitative supply‑chain measurement presented).
- Pakistan’s weaker governance of military AI may lower immediate compliance burdens for firms but raise reputational and export risks. *Evidence:* Synthesis of Pakistan’s governance documents and civil–military literature, with inferential policy commentary on market and reputational consequences.
- Divergent regulatory regimes increase compliance uncertainty for firms and may fragment markets for dual‑use and defence‑adjacent AI goods/services. *Evidence:* Policy commentary drawing on comparative regulatory findings; inference about market effects rather than empirical measurement.
- High frictions or opaque consent reduce data supply, raising costs of training models and potentially reducing market competition by advantaging incumbents with richer legacy data. *Evidence:* Economic reasoning and scenario analysis from the workshop; proposed as an implication rather than an empirically tested claim in the workshop summary.
- Inadequate consent creates information asymmetries and negative externalities (privacy harms, loss of trust) that can distort demand for AI services. *Evidence:* Theoretical/economic argument presented in the workshop materials and position papers; not supported by a specific empirical study within the workshop summary.
- Dynamic behavior of models (continual learning, model updates) changes the meaning of past consent. *Evidence:* Conceptual argument discussed at the workshop and in position papers; no empirical longitudinal analysis presented within the workshop summary.
- Decision delegation to AI agents and opaque personalization blur the scope of consent and control. *Evidence:* Theoretical and design-oriented synthesis from interdisciplinary workshop discussions and position papers; no empirical measurement reported.
- Existing privacy controls are neither user-friendly nor empowering. *Evidence:* Qualitative assessment produced during co-design and participatory prototyping at the workshop and in position papers; no quantitative usability metrics presented in the summary.
- Privacy policies remain hard to understand; transparency alone doesn’t ensure protection. *Evidence:* Workshop synthesis and position papers citing longstanding observations in HCI and privacy research; the workshop did not report a new empirical study measuring comprehension.
- Cookie banners and clickwrap routinely violate informed-consent principles. *Evidence:* Claim arises from workshop findings and referenced critiques in position papers and HCI/privacy literature discussed during the workshop; no new empirical counts or sample sizes reported in the workshop summary.
- Current privacy-consent mechanisms (cookie banners, dense policies, transparency-only approaches) fail to deliver meaningful user control. *Evidence:* Synthesis from the workshop participants and position papers; based on qualitative critique of existing mechanisms using the Futures Design Toolkit and participatory design discussions. No primary empirical sample or quantitative evaluation reported in the workshop summary.
- Biased or unrepresentative AI outputs produce negative externalities, including maladaptation and inefficient investments in vulnerable regions. *Evidence:* Conceptual analysis and illustrative cases linking misleading model outputs to maladaptive decisions; the paper notes risks rather than providing quantified incidence or cost estimates.
- Returns to scale in compute and data favor incumbents; without intervention this dynamic can entrench inequality in the global climate-information market (see the illustrative formalization after this list). *Evidence:* Economic theory of returns to scale combined with observed compute concentration; no empirical elasticity or returns-to-scale estimates provided.
- Concentration of compute and model development creates market power for Northern institutions and companies, likely leading to unequal pricing, control over standards, and capture of high-value climate services. *Evidence:* Descriptive mapping of concentration plus economic analysis of market structure and returns to scale; illustrative rather than quantitatively proven across markets.
- Rapid AI adoption without a shift from model-centric to data- and equity-centric development risks producing systematically worse performance and misleading recommendations for the most climate-vulnerable, data-sparse regions. *Evidence:* Synthesis of domain-specific case studies (weather/climate, impact models, LLMs) and conceptual causal tracing demonstrating how infrastructure asymmetry can degrade outputs in vulnerable regions; evidence illustrative rather than causal-estimate based.
- Large language models (LLMs) that rely on dominant, textualized climate knowledge tend to foreground Northern epistemologies and marginalize local or indigenous knowledge, reinforcing biases in climate narratives and recommendations. *Evidence:* Case studies and analysis of training-corpus composition and output examples illustrating the dominance of Northern textual sources and examples of sidelining local knowledge; no large-scale audit results provided.
- In climate impact modelling, sparse and unrepresentative exposure and vulnerability data combined with inadequate validation generate high uncertainty and risk of misleading interventions and maladaptation in vulnerable locales. *Evidence:* Targeted case studies and literature synthesis showing gaps in exposure/vulnerability datasets and validation failures; argument is illustrated rather than quantified across all systems.
- In weather and climate modelling, historically and spatially biased observational data produce systematic performance gaps in under-observed tropical and low-income regions, reducing forecast fidelity where adaptive capacity is lowest. *Evidence:* Comparative, domain-specific case studies and literature review documenting observational data sparsity and illustrative empirical performance gaps; no single cross-system statistical estimate provided.
- The geographic concentration of compute and model development creates path dependence: model design, training datasets, and validation reflect Northern priorities and contexts. *Evidence:* Conceptual analysis supported by cross-disciplinary synthesis and illustrative case studies showing dataset selection, validation practices, and model design choices aligned with Northern contexts rather than global representativeness.
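As referenced in the returns-to-scale item above, the mechanism can be stated compactly with a standard illustrative production function; this is an expository assumption, not a specification taken from the paper:

$$
Q = A\,C^{\alpha} D^{\beta}, \qquad \alpha + \beta > 1,
$$

where $Q$ is the output of a climate-information service, $C$ is compute, $D$ is data, and $A$ is a productivity term. With $\alpha + \beta > 1$, scaling both inputs by a factor $k > 1$ scales output by $k^{\alpha+\beta} > k$, so average cost falls with size and incumbents holding larger $C$ and $D$ pull further ahead. An empirical version of the claim would estimate $\alpha + \beta$, which, as the evidence note says, the paper does not provide.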