Evidence (2340 claims)

Claim counts by category:

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
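One convenient way to read the matrix is the share of positive findings per outcome. Below is a minimal sketch using four rows transcribed from the table; for some rows the printed Total exceeds the sum of the four direction columns, so totals are recomputed here from the direction counts, and "—" cells are treated as 0:

```python
# Share of positive findings per outcome, from rows of the matrix above.
rows = {
    # outcome: (positive, negative, mixed, null)
    "Firm Productivity": (277, 34, 68, 10),
    "AI Safety & Ethics": (117, 177, 44, 24),
    "Task Completion Time": (78, 5, 4, 2),
    "Job Displacement": (5, 31, 12, 0),
}

for outcome, (pos, neg, mixed, null) in rows.items():
    total = pos + neg + mixed + null
    share = pos / total
    print(f"{outcome}: {share:.0%} positive ({pos}/{total})")
```

The spread is wide: task-completion-time claims are overwhelmingly positive, while job-displacement claims skew negative, matching the table.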
Org Design
- Embedding AI into workflows may change firm boundaries (e.g., outsourcing models vs. in‑house systems) and make investments in internal auditability and explainability strategic assets.
  Evidence: Theoretical implication drawn from synthesis of organizational boundary theory and practitioner trends; suggested rather than empirically demonstrated within the paper.
- AI is likely to continue shifting the frontier of early discovery and increase the throughput and quality of hypotheses, but persistent biological uncertainty and the cost of clinical validation mean AI will complement, not fully replace, traditional R&D for the foreseeable future.
  Evidence: Synthesis of technological trends, application successes and limitations, translational risk, and economic reasoning presented throughout the paper.
- Proprietary data, precompetitive consortia, and platform consolidation can create barriers to entry; public-data initiatives could alter competitive dynamics.
  Evidence: Market-structure analysis and discussion of data-access models in the paper, with examples of consortia and proprietary platform effects.
- Expect strong returns-to-scale and winner-take-most dynamics: large incumbents and well-funded startups with proprietary data/compute may dominate the field.
  Evidence: Economic reasoning and observations in the paper about data/compute concentration, platform effects, and market outcomes.
- Early-stage unit costs and time-per-hit can fall with AI, but late-stage clinical trial costs driven by biology remain the primary bottleneck to overall R&D productivity gains.
  Evidence: Qualitative assessment of stage-specific effects based on industry observations and conceptual decomposition of R&D stages; no new cost accounting or econometric estimates provided.
- AI can improve specific stages of drug discovery but cannot eliminate fundamental biological uncertainty.
  Evidence: Conceptual and thematic analysis across technological capability and R&D integration levels; supported by illustrative examples showing limits of prediction in complex biology.
- Many of the fundamental advantages and challenges studied in distributed computing also arise in LLM teams.
  Evidence: Empirical and/or conceptual analysis reported by the authors mapping distributed computing phenomena to LLM-team behavior (the excerpt states this finding but does not include the experimental details or metrics).
- There is a design gap: developers' emphasized traits (politeness, strictness, imagination) differ from workers' preferred traits (straightforwardness, tolerance, practicality).
  Evidence: Comparison of developer and worker survey responses reported in the study (171 tasks; LM scaling to 10,131 tasks).
- Human capital is no longer defined solely by formal education or accumulated experience; it increasingly takes the form of a multidimensional system in which cognitive abilities, digital competencies, social and communicative skills, and ethical awareness interact and reinforce one another.
  Evidence: Result of the paper's synthesis combining systemic analysis and comparative assessment of international practices; conceptual/qualitative evidence rather than quantified measurement across populations.
- Ongoing digital transformation and the widespread adoption of artificial intelligence are reshaping the formation, structure, and practical use of human capital in modern economies.
  Evidence: Paper's core analytical conclusion based on systemic analysis, comparative assessment of international practices, and analytical generalization of organizational learning models; no primary quantitative sample size or experimental data reported.
- Organizations must reconceptualize AI implementation as a fundamental redesign of work systems requiring new competencies, governance structures, and attention to human cognitive limits.
  Evidence: Normative recommendation based on the paper's synthesis of organizational adaptation literature and reported negative outcomes of current AI deployments; no empirical test of this prescriptive claim provided in the excerpt.
- As compute costs decline, pro-price-competitive policies may lose their effectiveness in improving consumer surplus, while compute subsidies may shift from ineffective to effective.
  Evidence: Comparative statics within the theoretical model tracking how policy effects on consumer surplus change as the model parameter for compute cost is decreased.
- Pro-quality-competitive policies increase the provider's profits while reducing the downstream firms' profits.
  Evidence: Model equilibrium analysis indicating that enhancing downstream quality competition shifts surplus toward the provider (higher provider profit) while lowering downstream firms' profits in the modeled equilibria.
- Compute subsidies are effective at improving consumer surplus only when compute or data preprocessing costs are low.
  Evidence: Model analysis and comparative statics in the paper: introducing compute subsidies raises consumer surplus in parameter regions where compute/preprocessing costs are low.
- Policies that promote price competition in downstream markets boost consumer surplus only when compute or data preprocessing costs are high.
  Evidence: Comparative-static results from the game-theoretic model showing that pro-price-competitive policy interventions increase consumer surplus under parameter regimes where compute or data preprocessing costs are high.
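The regime switching described in these model claims can be illustrated with a deliberately toy calculation. This is not the paper's model: the linear forms and cutoffs below are invented solely to show how a policy's effect on consumer surplus can change sign as a compute-cost parameter `c` falls.

```python
# Toy comparative statics (NOT the paper's model). Each function returns
# an assumed change in consumer surplus from a policy, as a function of
# a compute-cost parameter c in [0, 1]. The linear forms are hypothetical.

def gain_from_price_competition(c: float) -> float:
    # Assumed to help only when compute costs are high (c > 0.4 here).
    return 0.5 * c - 0.2

def gain_from_compute_subsidy(c: float) -> float:
    # Assumed to help only when compute costs are low (c < 0.6 here).
    return 0.3 - 0.5 * c

# As c declines, the effective policy flips from promoting price
# competition to subsidizing compute, mirroring the claims above.
for c in (0.9, 0.5, 0.1):
    print(c, gain_from_price_competition(c) > 0, gain_from_compute_subsidy(c) > 0)
```

The point of the sketch is only the crossover: under these assumed forms, each policy's surplus gain is positive on one side of a cost threshold and negative on the other.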
- The maturity of an organization's data governance framework influences the success of AI and Big Data in lowering market uncertainty.
  Evidence: Findings from the qualitative case studies and overall analysis highlighting organizational data-governance maturity as a moderating factor (no standardized maturity measure or sample breakdown provided in the summary).
- The stringency of the regulatory environment moderates how effectively AI and Big Data reduce market uncertainty.
  Evidence: Moderation identified via the study's analysis and case studies (specific regulatory measures and empirical tests not detailed in the summary).
- The effectiveness of AI and Big Data in reducing market uncertainty is contingent upon industry type.
  Evidence: Observed variation across industries in the paper's qualitative case studies and analysis (the summary does not specify which industries or comparative sample sizes).
- Technology adoption preferences correlate with structural role: central coordinators prefer predictive analytics while peripheral actors prioritize traceability systems.
  Evidence: Interview data tied to network positions produced reported preferences for types of technologies (predictive analytics vs. traceability systems) associated with different structural roles; analysis based on thematic coding and node-role mapping (sample details not in abstract).
- Facilitated access to AI reconfigures startup roles, organizational structures, and decision routines.
  Evidence: Analytic findings from semi-structured interviews pointing to changes in role definitions, reporting lines, and decision-making routines after AI adoption (qualitative evidence; sample size not specified).
- AI adoption generates heterogeneous effects across occupations.
  Evidence: Summary statement based on analysis of publicly available labor market data (occupational-level heterogeneity asserted but specific datasets, sample sizes, and methods not described).
- AI is not an unprecedented disruption; its effects can be situated within established economic frameworks related to automation and task substitution.
  Evidence: Conceptual analysis comparing recent AI developments to historical automation and task-substitution frameworks; empirical grounding claimed via publicly available labor market and productivity data (details not provided).
- The survey identifies three developer archetypes: Enthusiasts, Pragmatists, and the Cautious.
  Evidence: Classification/typology derived from the study's survey data of 147 developers (e.g., cluster analysis or thematic grouping) identifying three distinct groups based on usage patterns, attitudes, and intent.
- Variations in prompt design influenced agents’ performance indicators, including response accuracy, task completion efficiency, coordination coherence, and error rates.
  Evidence: Experimental simulations with systematic variation of prompt designs and quantitative analysis of resulting performance indicators listed above. (Sample size, effect sizes, and statistical tests not specified in the provided excerpt.)
- Knowledge democratization through AI may reduce educational inequality but may also exacerbate digital divides and erode universities' social mobility function.
  Evidence: Theoretical and socio-political analysis considering opposing effects; framed as a conditional/mixed outcome without empirical measurement reported in the paper.
- AI displacement potential varies substantially across university functions.
  Evidence: Summary finding from the paper's comparative analysis of university functions; the paper provides ranked/percent estimates but does not report empirical sampling or statistical testing.
- The impact of AI on supply chain stability in sports enterprises exhibits heterogeneity by enterprise type and profitability status.
  Evidence: Heterogeneity/subgroup analyses within the DML panel estimations (sample of 45 listed SEs, 2012–2023) showing differential AI effects across firm types and across firms with different profitability profiles.
- There is significant variation in psychological readiness for AI across generational cohorts, industry sectors, and organizational maturity levels.
  Evidence: Aggregated findings from emerging AI–HRM empirical studies referenced in the paper (no specific study counts or sample sizes provided in the summary).
- Each category of AI trigger presents distinct avenues for value creation alongside significant risks.
  Evidence: Analytical argument in the paper discussing potential benefits and risks per trigger type. No empirical evaluation, case studies, or quantitative evidence reported here.
- More sophisticated AI-agent populations are not categorically better: whether increased sophistication helps or harms depends entirely on a single quantity, the capacity-to-population ratio, which can be known prior to deployment.
  Evidence: Combined empirical and mathematical findings in the paper showing that the effect of agent sophistication on collective outcomes is governed by the capacity-to-population ratio.
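Because the claim says the governing quantity is computable before deployment, the check can be sketched as a simple predicate. The function names and threshold value are hypothetical placeholders; the excerpt does not give the paper's actual criterion.

```python
# Sketch of a pre-deployment check keyed to the capacity-to-population
# ratio. Names and the threshold are assumptions, not the paper's values.

def capacity_to_population_ratio(total_capacity: float, n_agents: int) -> float:
    return total_capacity / n_agents

def sophistication_expected_to_help(ratio: float, threshold: float = 1.0) -> bool:
    # Above the (assumed) threshold, more sophisticated agents are expected
    # to improve collective outcomes; below it, to harm them.
    return ratio >= threshold

ratio = capacity_to_population_ratio(total_capacity=120.0, n_agents=100)
print(ratio, sophistication_expected_to_help(ratio))
```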
- In the sentiment-analysis task, individual differences in user characteristics shape how users respond to AI explanations.
  Evidence: Results from the preregistered sentiment-analysis experiment reported in the paper indicating interaction effects between user characteristics and explanation types. (Exact sample size and statistical details not provided in the excerpt.)
- This mainstream narrative about what AI is and what it can do is in tension with another emerging use case: entertainment.
  Evidence: Authors' conceptual argument contrasting dominant productivity-oriented narratives with observed/emerging entertainment uses; no quantified data in the excerpt.
- The rapid spread of artificial intelligence (AI) in U.S. organizations has radically altered the managerial decision-making process.
  Evidence: Statement based on a conceptual research design and integration of interdisciplinary literature (literature review). No empirical sample or quantitative data reported.
- The increasing integration of artificial intelligence (AI) into organizational decision-making has fundamentally reshaped how managers analyze information, evaluate alternatives, and exercise judgment.
  Evidence: Synthesis of interdisciplinary literature presented in this conceptual meta-analysis; no primary empirical sample or quantitative effect sizes reported in the abstract (literature review basis).
- AI adoption rates differ across countries and firm sizes.
  Evidence: Descriptive/empirical comparisons using AI diffusion indicators and firm-level data from the four named Central and Eastern European countries; heterogeneity by firm size reported.
- AI productivity effects are not direct but conditional on organizational readiness.
  Evidence: Empirical analysis of firm-level data covering Serbia, Croatia, Czechia, and Romania combined with AI diffusion indicators; conditional/interaction analysis implied by framing (paper reports that productivity effects depend on organizational factors).
- Smaller models augmented with curated Skills can match the performance of larger models without Skills (model–skill tradeoff).
  Evidence: Cross-size performance comparisons reported across seven agent–model configurations showing that certain smaller model + curated-Skill pairings achieve pass rates comparable to larger model baselines without Skills. Analysis uses the SkillsBench trajectories (7,308 total) to support tradeoff claims.
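A comparison of this kind reduces to grouping trajectories by (model, skills) and computing pass rates per group. The record layout and model labels below are assumed for illustration; the actual SkillsBench schema is not given in the excerpt.

```python
from collections import defaultdict

# Hypothetical trajectory records: (model, skills_enabled, passed).
trajectories = [
    ("small-model", True, True),
    ("small-model", True, False),
    ("small-model", True, True),
    ("large-model", False, True),
    ("large-model", False, False),
    ("large-model", False, True),
]

counts = defaultdict(lambda: [0, 0])  # (model, skills) -> [passes, runs]
for model, skills, passed in trajectories:
    counts[(model, skills)][0] += int(passed)
    counts[(model, skills)][1] += 1

pass_rates = {key: p / n for key, (p, n) in counts.items()}
print(pass_rates)
```

Comparable pass rates for the smaller model with Skills and the larger model without Skills, as in this toy data, are the pattern that would support the tradeoff claim.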
- Implication for AI economics: scholars should be alert to epistemic capture, since funding, institutional incentives, and geopolitical context can shape which AI governance and market theories gain traction.
  Evidence: Analogy and inference from the historical Cold War case study applied to contemporary AI economics; conceptual argument rather than direct empirical test in AI context.
- The technological-form parameter (η1 vs. η0, i.e., proprietary vs. commodity) can independently flip the model across the inequality-increase/decrease boundary.
  Evidence: Model counterfactuals varying η1 versus η0 show that changing the degree of proprietary control over AI can move the calibrated model from one regime to the other.
- At the calibrated baseline, the sign of the change in inequality (ΔGini) is determined mainly by one empirical moment (m6) together with the rent‑sharing elasticity ξ.
  Evidence: Results of the sensitivity decomposition and calibration reported in the paper indicating m6 and ξ primarily drive the sign of ΔGini in the baseline parameterization.
- Students use GenAI as a co-designer and idea generator, which modifies workflow, decision points, and evaluative practices in their design process.
  Evidence: Qualitative interview data from architecture students; thematic analysis surfaced accounts of GenAI being used for ideation, variant generation, and as a collaborative partner (N unspecified).
- Collaboration between architecture students and generative AI reshapes creative cognition in the architectural design process through algorithmic thinking strategies.
  Evidence: Semi-structured interviews with architecture students (interview sample size not specified) analyzed via inductive thematic analysis; authors synthesize recurring themes linking GenAI use to changes in cognitive strategies.
- The taxonomy clarifies where substitution versus complementarity are likely: AI-assisted tasks imply partial substitution of routine work; AI-augmented applications generate complementarities that increase demand for higher cognitive skills; AI-automated systems shift labor toward monitoring, exception handling, and governance.
  Evidence: Inference from mapping the three interaction levels to observed case features (n=4) and application of the Bolton et al. framework in cross-case synthesis.
- AI-augmented systems support real-time medical tasks (e.g., decision support during procedures), amplifying human judgment and speed but raising required cognitive skills and changing training and coordination practices.
  Evidence: Findings from the case(s) labeled AI-augmented in the four-case qualitative sample and cross-case interpretive analysis using the service-innovation framework.
- DeFi components could enable automated milestone disbursement instruments but face regulatory and counterparty risk barriers.
  Evidence: Paper mentions DeFi as a potential disbursement automation mechanism and notes regulatory/counterparty risk; this is a conditional, context-dependent claim without pilot evidence for large-scale DeFi use.
- High-quality labeled IoT traffic is scarce and valuable, and data-sharing mechanisms (federated learning coalitions, data marketplaces) could emerge but require privacy and legal frameworks.
  Evidence: Survey notes about dataset scarcity and potential economic models for data sharing; recommendation that privacy/legal frameworks are prerequisites.
- There is a strong commercial opportunity for deployable ML-IDS tailored to IoT and edge deployments, but development and operational costs (data collection, compression, privacy, pipelines) are substantial.
  Evidence: Economic implications and market analysis drawn from the survey: unmet deployment needs, scarce labeled data, and additional engineering requirements imply market demand and higher costs.
- Heterogeneous returns: returns to AI will vary across SMEs due to differences in managerial capabilities and local institutional contexts; targeting complementary capabilities may be more cost‑effective than uniform subsidies for hardware/software.
  Evidence: Theoretical conclusion drawn from integrating RBV, dynamic capabilities, and institutional theory across reviewed studies; supported by cited heterogeneity in the literature.
- Sector-specific characteristics (regulation, competition intensity, product tangibility) shape the feasibility and design of VBP systems.
  Evidence: Thematic cluster from the SLR where sectoral factors were repeatedly cited as influencing VBP design across included studies.
- Implementation challenges and pricing dynamics differ between B2B and B2C settings.
  Evidence: SLR thematic coding that separated findings and implementation considerations for B2B versus B2C contexts within the included literature.