Evidence (2469 claims)
Claims by category:

- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
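To make the matrix easier to read, a per-row "positive share" can be computed directly from the four direction columns. Below is a minimal sketch assuming a hand transcription of three rows into a Python dict (the row selection is illustrative; shares are computed from the direction columns rather than the printed Total column, since the two do not always agree in the table above):

```python
# Hypothetical transcription of three rows of the evidence matrix;
# counts are (positive, negative, mixed, null).
matrix = {
    "Firm Productivity": (306, 39, 70, 12),
    "AI Safety & Ethics": (116, 177, 44, 24),
    "Job Displacement": (6, 38, 13, 0),
}

def positive_share(counts):
    # Share of claims with a positive finding, over the four direction columns.
    total = sum(counts)
    return counts[0] / total if total else 0.0

shares = {name: round(positive_share(c), 2) for name, c in matrix.items()}
# Firm Productivity skews positive, AI Safety & Ethics and Job Displacement do not.
```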
Org Design
Corpus-derived feedback is useful only when a high-quality first-stage retriever already supplies strong candidate documents to the retrieval pipeline.
Experiments that varied first-stage retriever strength and compared corpus-derived vs. LLM-generated feedback on retrieval performance across the 13 BEIR tasks.
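The mechanism behind this claim is that corpus-derived feedback can only rework what the first stage surfaces. A toy sketch of that dependency follows; both functions are hypothetical illustrations (a term-overlap first stage and a pseudo-relevance-style reranker), not the experimental setup used on the BEIR tasks:

```python
from collections import Counter

def first_stage(query, corpus, k=10):
    # Hypothetical lexical first stage: score documents by term overlap.
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def corpus_feedback_rerank(query, candidates):
    # Corpus-derived feedback: expand the query with frequent terms from the
    # top candidates, then rescore the candidate set. If the candidate pool
    # is weak, the expansion terms are noise -- hence the dependence on a
    # strong first-stage retriever.
    expansion = Counter(w for doc in candidates[:3] for w in doc.lower().split())
    terms = set(query.lower().split()) | {w for w, _ in expansion.most_common(5)}
    return sorted(candidates,
                  key=lambda d: len(terms & set(d.lower().split())),
                  reverse=True)
```

Only documents returned by `first_stage` can ever be reranked, so the quality of the feedback step is bounded by first-stage recall.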
Co-design across hardware, middleware, and applications accelerates downstream algorithmic innovation; fragmentation across ad hoc integrations slows adoption.
Conceptual argument and analogy to co-design benefits in classical HPC and systems engineering; no empirical evidence within QCSC context.
Cloud providers or specialized QCSC service providers could capture market share by offering access, leading to platform markets and network effects (data, software ecosystems, calibrated middleware).
Economic reasoning and analogy to cloud/platform dynamics; discussion of bundling QPU/GPU/CPU access and middleware ecosystems; no empirical adoption data.
Effective ISP depends on high-quality internal data and sometimes external data sharing across partners, raising issues around data ownership, incentives to share, and the design of contracting/market mechanisms to internalize coordination gains.
Case evidence on importance of data quality and authors' policy/contractual discussion; conceptual argument informed by interviews about data-sharing frictions.
ISP automation shifts labor demand toward higher-skill roles (data governance, analytics, cross-functional coordination) and reduces demand for routine forecasting and manual reconciliation tasks.
Interview reports and authors' task-based inference across cases, supplemented by economic reasoning about task reallocation.
ISP is relevant across multiple sectors (FMCG, manufacturing, retail) but outcomes and capabilities are heterogeneous by firm size and legacy IT footprint.
Sample composition includes firms from FMCG, manufacturing, and retail; authors report cross-case heterogeneity linked to firm characteristics and IT legacy.
Technology alone is insufficient; successful ISP requires cross-functional collaboration and continuous process improvement to realize gains from digital integration.
Cross-case interview evidence showing cases where digital tools did not produce expected benefits until processes and collaboration were changed; authors' synthesis of recurring barriers and enablers across the five cases.
Integrated Supply Planning (ISP) improves resilience and competitive performance only when advanced technologies (notably AI-enabled forecasting and ERP integration) are combined with organizational alignment, leadership commitment, and a data-driven culture.
Qualitative multi-case study (n = 5 medium-to-large organizations across FMCG, manufacturing, retail); cross-case comparison of semi-structured interviews with supply chain professionals reporting instances where technology adoption produced gains only alongside organizational enablers.
Standardized explainability requirements (audits, disclosure mandates) will affect market entry, favor incumbents with resources to meet standards, and create demand for third-party auditors and certification services.
Policy- and regulatory-focused literature synthesized in the review; claims are deductive implications from governance proposals and descriptive accounts rather than empirical causal tests.
Implementing explainability increases upfront development costs (tooling, documentation, UIs, training) and ongoing compliance/monitoring costs, but can lower downstream costs from litigation, audits, and reputational harm.
Synthesis of economic and policy literature in the review describing cost components and trade-offs; statements are conceptual and based on reviewed case studies and analyses rather than primary cost accounting.
Firm returns to AI adoption depend crucially on sociotechnical investments (training, redesign, knowledge infrastructure), so AI price/performance alone is an incomplete predictor of adoption returns.
Conceptual claim grounded in organizational literature synthesized in the paper; no firm-level econometric evidence presented within the paper itself.
Economic models of AI impact should move beyond simple task-automation/substitution frameworks to incorporate team-level complementarities and cognitive-process primitives (reasoning, memory, attention).
Theoretical recommendation for economists based on the paper's framework; supported by conceptual arguments rather than empirical re-specification or estimation shown in the paper.
Sociotechnical determinants — team composition, trust calibration, shared mental models, training regimes, and task structure — materially shape Human–AI team effectiveness beyond algorithmic performance alone.
Integrative review of multiple literatures (organizational behavior, human–computer interaction, psychology); presented as conceptual determinants; no empirical quantification provided in the paper.
Task reallocation: demand will fall for routine, automatable tasks and rise for complementary, cognitive, and governance tasks.
Task‑level decomposition and theoretical arguments about comparative advantage between AI and humans; no quantitative labor market estimates.
Overall, AI will be augmentative: many roles will transform rather than disappear; transition costs and task reallocation are the primary labor‑market challenges.
Synthesis of task‑based automation/complementarity analysis and scenario reasoning; paper explicitly notes lack of large‑sample causal evidence.
Within the next five years, AI will become an embedded, augmentative co‑pilot across software development and adjacent tech professions, shifting daily work from manual, task‑level activities to higher‑order, idea‑driven collaboration with intelligent systems.
Conceptual, forward‑looking analysis synthesizing current AI capability trends, illustrative examples of existing AI assistants, and scenario reasoning; no empirical longitudinal data or sample size reported.
Improved anomaly detection and auditability can reduce some operational risks, but opaque or mis-specified models create model risk, systemic forecasting correlations, and regulatory concerns requiring transparency and validation standards.
Risk assessment presented qualitatively in the paper, pointing to trade-offs between better detection and new model risks; no incident-level operational risk data or quantitative risk analysis included.
Labor demand will shift toward analytics, data engineering, and AI governance roles in finance while routine reporting roles may be automated or re-tasked.
Workforce-impact claim based on mechanization/automation logic in the paper; no labor-market empirical analysis, occupation-level employment data, or causal estimates are provided.
Simulations with heterogeneous workers reproduce the analytical predictions and show sharp divergence in outcomes across the two regimes.
Numerical simulation exercises using a heterogeneous-agent calibration reported in the paper; exact sample/calibration details referenced in the numerical section (not provided in the summary).
Distributional outcomes hinge on institutional/allocation factors (ownership, bargaining power) that determine who controls organizational elasticity and thus who captures coordination rents.
Model mechanism and comparative statics showing that varying the allocation of coordination benefits changes equilibrium distributional outcomes; policy/interpretive discussion linking this to institutions.
There is a regime fork: the same coordination-compressing technology can yield either broad-based gains (widespread wage/output increases) or superstar concentration (concentration of gains among few agents), depending on who captures the coordination rents (who controls organizational elasticity).
Analytical characterization of comparative static equilibria and numerical simulations with heterogeneous agents demonstrating two distinct regimes when varying parameters that capture allocation of coordination benefits (organizational elasticity control).
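The regime fork can be illustrated with a toy heterogeneous-agent simulation. This is not the paper's model: the skill distribution, the surplus size, and the `capture_share` parameter (a stand-in for who controls organizational elasticity) are all hypothetical, chosen only to show how the same surplus yields broad-based gains or superstar concentration depending on its allocation:

```python
import random

def simulate(capture_share, n=1000, seed=0):
    # Toy sketch: a coordination technology creates a surplus worth half of
    # baseline output; `capture_share` is the fraction captured by the top
    # decile of agents, the remainder is spread evenly across all agents.
    rng = random.Random(seed)
    base = sorted(rng.lognormvariate(0, 0.5) for _ in range(n))  # pre-tech wages
    surplus = 0.5 * sum(base)
    top = n // 10
    wages = list(base)
    for i in range(n - top, n):
        wages[i] += capture_share * surplus / top
    for i in range(n):
        wages[i] += (1 - capture_share) * surplus / n
    # Share of total wages going to the top decile.
    return sum(wages[n - top:]) / sum(wages)

broad = simulate(capture_share=0.1)      # rents widely shared
superstar = simulate(capture_share=0.9)  # rents concentrated at the top
```

Total output is identical in both runs; only the top-decile wage share diverges, mirroring the two regimes described above.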
Short-run accounting and measurement approaches may miss long-run gains from improved decision quality or fraud reduction attributable to digital/AI systems.
Conceptual discussion and selected longitudinal case examples in the literature; the review highlights measurement horizons as a methodological limitation.
AI is capital–skill complementary in the public sector: returns to AI investments depend critically on workforce capabilities and managerial practices.
Theoretical arguments and some empirical/case evidence cited in the review indicating complementarities between technology and skills/management; systematic quantification across contexts is limited.
In practice, AI-related productivity gains are frequently muted or uneven across contexts.
Across reviewed literature, multiple case studies and evaluations report mixed or limited net productivity improvements; review notes heterogeneity by country, sector, and maturity of implementation. No pooled causal estimates available.
Automation bias and changing work processes imply re‑skilling needs for public servants and potential shifts in public sector employment composition.
Findings and recommendations in multiple studies within the review documenting automation effects on workflows and workforce skill requirements (from the 103‑item corpus).
Predictive governance can change fiscal timing (earlier interventions) and alter uncertainty profiles for public budgets, requiring economists to model dynamic fiscal impacts and risks from algorithmic failure or bias.
Implication drawn in the review from case studies and economic reasoning present in the literature; recommendation for fiscal modeling based on synthesized evidence across the 103 items.
Interoperability and ethical‑by‑design requirements influence vendor lock‑in, competition, and the emergence of platform providers in markets for public‑sector AI solutions.
Policy and market analyses within the reviewed literature that link technical standards and ethical design requirements to market structure and vendor dynamics (synthesized from the 103 items).
Predictive analytics and AI enable anticipatory policy design (early intervention, forecasting), but they raise normative and governance questions about acceptable levels of prediction‑driven intervention.
Thematic findings from the review's mapping of predictive analytics use cases and accompanying ethical/governance discussions across the 103‑item corpus.
Human–AI interaction issues—such as automation bias and shifting public servant roles—affect decision quality and legitimacy, creating a need for human‑in‑the‑loop processes.
Multiple empirical and theoretical contributions in the reviewed literature identified automation bias and role shifts; recommendation for human‑in‑the‑loop emerges from synthesis of these studies.
Legal frameworks like the EU GDPR provide a useful normative benchmark, but their protections do not automatically translate across jurisdictions; cross‑border research encounters gaps and asymmetries in enforcement and rights.
Normative and legal analysis contrasting GDPR principles with the Chilean/regional regulatory context and observed cross‑border data flow practices in the case study.
State-level divergence in AI-related regulation will create geographic heterogeneity in adoption costs and labor protections, potentially inducing firm and worker sorting across states and making national inference about AI’s effects more difficult.
Comparative policy review across states described in the commentary; inferential claim without presented empirical migration or firm-location data.
Regulatory uncertainty (rollbacks and a patchwork of rules) can raise compliance and political risk costs, causing some firms to accelerate private governance and self-regulation while causing others to delay investment or relocate activities.
Theoretical and policy reasoning based on review of regulatory signals and firm behavior literature; no empirical firm-level study or sample provided in the commentary.
Regulatory volatility and fragmentation will shape firms’ AI investment decisions, firms’ workplace practices (surveillance, task allocation), and the distributional consequences of AI for wages, employment and bargaining power.
Analytic synthesis linking observed policy instability and jurisdictional patchwork to likely firm responses and labor-market outcomes; conceptual inference rather than causal empirical evidence.
Economic outcomes of healthcare AI depend critically on governance design: policies and technical architectures (e.g., federated learning, certification standards, tiered risk management) will determine whether mixed open/proprietary ecosystems yield broad welfare gains or entrench inequities and concentrated market power.
High-level economic reasoning and synthesis of empirical and theoretical literature on governance, market structure, and technology adoption; prescriptive conclusion based on aggregated evidence rather than causal testing within the paper.
Reliable, well-integrated AI may raise clinical productivity and shift labor toward higher-value tasks, but misaligned deployments risk increased administrative burden (e.g., appeals, oversight).
Mixed evidence from pilot studies, observational reports, and stakeholder feedback synthesized in the paper; heterogeneity across settings and limited long-term outcome data noted.
Proprietary models concentrate costs into vendor payments and can potentially lower internal operational burden for providers.
Industry reports and economic synthesis comparing vendor-managed proprietary offerings with self-managed alternatives; based on reported vendor pricing models and operational roles.
Open-source lowers licensing fees but can shift costs toward in-house engineering, governance, and validation.
Cost-structure analyses and industry reports aggregated in the synthesis comparing licensing vs. internal operational costs across deployment models.
Open-source models show narrow but growing parity with proprietary models on some diagnostic tasks.
Synthesis of peer-reviewed comparative studies and benchmark reports indicating comparable diagnostic accuracy in limited tasks; authors note heterogeneity across studies and lack of long-term clinical trials.
Automation displaces some routine jobs but creates demand for roles in programming, data science, system maintenance, and higher‑order cognitive tasks.
Synthesis of labor‑market literature and sectoral case studies summarized in the review; relies on secondary empirical studies rather than new microdata analysis; sample sizes and study designs vary by referenced work.
Potential policy levers include mandatory provenance metadata, liability rules, taxes/subsidies to internalize harms, antitrust actions to limit concentration, and funding for public verification tools; each policy choice will shape incentives, innovation rates and market outcomes.
Policy options and scenario analysis summarized from legal/policy literature; presented as hypothetical levers rather than empirically tested interventions.
Economic returns may shift toward owners of data, model capacity and verification technology, while traditional creators may demand new compensation mechanisms (e.g., data-use royalties, collective licensing).
Conceptual economic analysis and synthesis of stakeholder- and rights-based literature in the narrative review.
Abundant synthetic media may erode the signaling value of standard digital content and create demand for authentication services, certification markets and premium 'human-made' labels.
Conceptual analysis grounded in signaling and market-for-authenticity literature reviewed in the paper (no primary WTP studies included).
Large productivity gains in content production could reduce marginal costs and compress prices for many creative goods, potentially displacing some human labor while raising demand for high-skill oversight, curation and novel creative inputs.
Economic reasoning and literature review on automation/productivity effects; no new empirical estimates presented (narrative inference).
Social acceptance is uncertain: some studies find people may rate AI-generated content equal or superior to human-created content, while proliferation of artificial media could also spur distrust or rejection of digital media.
Cited empirical studies on content perception and trust summarized in the narrative review (no primary data; exact sample sizes and studies vary by citation).
If consumers prefer AI-generated content, demand shifts could lower prices and increase consumption volume for certain media types; alternatively, trust erosion could reduce overall demand for digital content.
Reference to empirical studies with mixed results (paper notes 'some studies show higher ratings for AI content') and economic scenario modeling in the discussion; the paper does not report sample sizes or meta-analytic statistics.
Ambiguities in copyright and dataset licensing will affect value capture (original creators versus model operators) and may create new rent opportunities from provenance/authentication services or certified 'human-made' labels.
Legal and economic literature synthesized in the review, plus policy discussion; no empirical royalty or rent-share data provided.
Generative audiovisual models pose displacement risk for creative and production roles, but also create demand for new skills (prompt engineering, curation, verification) and complementarities in oversight and post-production.
Economic argumentation and citations to labor-impact literature and case examples in the review; no original labor-market empirical study or sample statistics provided.
Adoption frictions—integration costs, data access, reliability, and regulatory compliance—may slow diffusion of AI agents and create heterogeneity in economic value across firms and sectors.
Theoretical implication supported by observed orchestration and governance challenges in deployments; recommendation/interpretation rather than direct causal measurement.
Implementation heterogeneity (how guardrails, human oversight, and orchestration are configured) likely drives outcome variation across deployments.
Observed heterogeneity in Alfred AI deployments and stated limitation that configuration differences affect outcomes; based on deployment comparisons and qualitative analysis (sample size/configurations unspecified).
Net productivity gains may be smaller once indirect costs—governance, monitoring, error-correction, orchestration—are accounted for; standard productivity accounting should include these costs.
Conceptual argument supported by observational documentation of governance and monitoring burdens in deployments; no precise cost accounting reported in summary.