Evidence (2469 claims)

Claim counts by topic:

- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. Dashes indicate no claims; row totals can exceed the sum of the four direction columns shown where claims carry other direction labels.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
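The matrix lends itself to simple secondary summaries. A minimal sketch, using four rows copied from the table above (dashes read as zero), computes each outcome's share of directional claims that are positive:

```python
# Counts per outcome, taken from the Evidence Matrix above:
# (positive, negative, mixed, null)
rows = {
    "Firm Productivity":    (306, 39, 70, 12),
    "AI Safety & Ethics":   (116, 177, 44, 24),
    "Inequality Measures":  (25, 77, 32, 5),
    "Task Completion Time": (88, 5, 4, 3),
}

def positive_share(counts):
    """Share of directional claims (excluding null) that are positive."""
    pos, neg, mixed, _null = counts
    directional = pos + neg + mixed
    return pos / directional if directional else 0.0

shares = {k: round(positive_share(v), 2) for k, v in rows.items()}
```

Under this summary, Firm Productivity and Task Completion Time skew strongly positive, while AI Safety & Ethics and Inequality Measures skew negative, mirroring the raw counts.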
Org Design
AI displacement potential varies substantially across university functions.
Summary finding from the paper's comparative analysis of university functions; the paper provides ranked/percent estimates but does not report empirical sampling or statistical testing.
The impact of AI on supply chain stability in sports enterprises exhibits heterogeneity by enterprise type and profitability status.
Heterogeneity/subgroup analyses within the DML panel estimations (sample of 45 listed sports enterprises, 2012–2023) showing differential AI effects across firm types and across firms with different profitability profiles.
There is significant variation in psychological readiness for AI across generational cohorts, industry sectors, and organizational maturity levels.
Aggregated findings from emerging AI–HRM empirical studies referenced in the paper (no specific study counts or sample sizes provided in the summary).
Each category of AI trigger presents distinct avenues for value creation alongside significant risks.
Analytical argument in the paper discussing potential benefits and risks per trigger type. No empirical evaluation, case studies, or quantitative evidence reported here.
More sophisticated AI-agent populations are not categorically better: whether increased sophistication helps or harms depends entirely on a single number, the capacity-to-population ratio, which can be known prior to deployment.
Combined empirical and mathematical findings in the paper showing that the effect of agent sophistication on collective outcomes is governed by the capacity-to-population ratio.
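Since the paper argues the governing ratio is computable before deployment, the decision logic reduces to a one-line pre-deployment check. A minimal sketch; the threshold value is an illustrative placeholder, not a number from the paper:

```python
def sophistication_effect(capacity: float, population: int,
                          threshold: float = 1.0) -> str:
    """Predict whether adding agent sophistication helps or harms a
    deployment, using only the capacity-to-population ratio.
    `threshold` is an illustrative placeholder, not the paper's value."""
    if population <= 0:
        raise ValueError("population must be positive")
    ratio = capacity / population
    return "helps" if ratio >= threshold else "harms"
```

Under the assumed threshold, a fleet with ample capacity per agent (`sophistication_effect(150, 100)`) is predicted to benefit from sophistication, while an over-populated one (`sophistication_effect(50, 100)`) is predicted to be harmed.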
In the sentiment-analysis task, individual differences in user characteristics shape how users respond to AI explanations.
Results from the preregistered sentiment-analysis experiment reported in the paper indicating interaction effects between user characteristics and explanation types. (Exact sample size and statistical details not provided in the excerpt.)
This mainstream narrative about what AI is and what it can do is in tension with another emerging use case: entertainment.
Authors' conceptual argument contrasting dominant productivity-oriented narratives with observed/emerging entertainment uses; no quantified data in the excerpt.
The rapid spread of artificial intelligence (AI) across U.S. organizations has radically altered managerial decision-making.
Statement based on a conceptual research design and integration of interdisciplinary literature (literature review). No empirical sample or quantitative data reported.
The increasing integration of artificial intelligence (AI) into organizational decision-making has fundamentally reshaped how managers analyze information, evaluate alternatives, and exercise judgment.
Synthesis of interdisciplinary literature presented in this conceptual meta-analysis; no primary empirical sample or quantitative effect sizes reported in the abstract (literature review basis).
AI adoption rates differ across countries and firm sizes.
Descriptive/empirical comparisons using AI diffusion indicators and firm-level data from the four named Central and Eastern European countries; heterogeneity by firm size reported.
AI productivity effects are not direct but conditional on organizational readiness.
Empirical analysis of firm-level data covering Serbia, Croatia, Czechia, and Romania combined with AI diffusion indicators; conditional/interaction analysis implied by framing (paper reports that productivity effects depend on organizational factors).
Smaller models augmented with curated Skills can match the performance of larger models without Skills (model–skill tradeoff).
Cross-size performance comparisons reported across seven agent–model configurations showing that certain smaller model + curated-Skill pairings achieve pass rates comparable to larger model baselines without Skills. Analysis uses the SkillsBench trajectories (7,308 total) to support tradeoff claims.
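The tradeoff claim can be expressed as a comparison over configurations. The pass rates below are placeholders, not the actual SkillsBench numbers:

```python
# (model_size, has_curated_skills) -> pass rate; values are illustrative.
pass_rates = {
    ("small", True):  0.62,
    ("small", False): 0.41,
    ("large", False): 0.60,
}

def model_skill_tradeoff(rates: dict, tol: float = 0.05) -> bool:
    """True when a small model with curated Skills matches a larger
    model without Skills, within tolerance `tol`."""
    return rates[("small", True)] >= rates[("large", False)] - tol
```

With these placeholder numbers the small-model-plus-Skills configuration clears the larger no-Skills baseline, which is the shape of the tradeoff the paper reports.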
Implication for AI economics: scholars should be alert to epistemic capture—funding, institutional incentives, and geopolitical context can shape which AI governance and market theories gain traction.
Analogy and inference from the historical Cold War case study applied to contemporary AI economics; conceptual argument rather than direct empirical test in AI context.
The technological-form parameter (η1 vs. η0, i.e., proprietary vs. commodity) can independently flip the model across the inequality-increase/decrease boundary.
Model counterfactuals varying η1 versus η0 show that changing the degree of proprietary control over AI can move the calibrated model from one regime to the other.
At the calibrated baseline, the sign of the change in inequality (ΔGini) is determined mainly by one empirical moment (m6) together with the rent‑sharing elasticity ξ.
Results of the sensitivity decomposition and calibration reported in the paper indicating m6 and ξ primarily drive the sign of ΔGini in the baseline parameterization.
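The reported decomposition is a local sensitivity exercise, which can be sketched generically with one-at-a-time finite differences. The `delta_gini` stand-in below is purely illustrative (the paper's model is not reproduced); it is chosen so that the moment `m6` scaled by the elasticity `xi` dominates the sign, matching the qualitative finding:

```python
def sensitivities(f, params, eps=1e-6):
    """One-at-a-time finite-difference sensitivities of f at a baseline.
    f maps a dict of parameters to a scalar (e.g. the model's dGini)."""
    base = f(params)
    out = {}
    for name, value in params.items():
        bumped = dict(params, **{name: value + eps})
        out[name] = (f(bumped) - base) / eps
    return out

def delta_gini(p):
    # Stylized stand-in: m6 scaled by the rent-sharing elasticity xi,
    # plus a deliberately weak dependence on a third parameter theta.
    return p["xi"] * p["m6"] - 0.01 * p["theta"]

s = sensitivities(delta_gini, {"m6": 0.2, "xi": 0.5, "theta": 1.0})
```

In this stylized baseline the `m6` sensitivity has the largest magnitude, i.e. the sign of the stand-in dGini is driven mainly by `m6` together with `xi`.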
Students use GenAI as a co-designer and idea generator, which modifies workflow, decision points, and evaluative practices in their design process.
Qualitative interview data from architecture students; thematic analysis surfaced accounts of GenAI being used for ideation, variant generation, and as a collaborative partner (N unspecified).
Collaboration between architecture students and generative AI reshapes creative cognition in the architectural design process through algorithmic thinking strategies.
Semi-structured interviews with architecture students (interview sample size not specified) analyzed via inductive thematic analysis; authors synthesize recurring themes linking GenAI use to changes in cognitive strategies.
The taxonomy clarifies where substitution versus complementarity are likely: AI-assisted tasks imply partial substitution of routine work; AI-augmented applications generate complementarities that increase demand for higher cognitive skills; AI-automated systems shift labor toward monitoring, exception handling, and governance.
Inference from mapping the three interaction levels to observed case features (n=4) and application of the Bolton et al. framework in cross-case synthesis.
AI-augmented systems support real-time medical tasks (e.g., decision support during procedures), amplifying human judgment and speed but raising required cognitive skills and changing training and coordination practices.
Findings from the case(s) labeled AI-augmented in the four-case qualitative sample and cross-case interpretive analysis using the service-innovation framework.
DeFi components could enable automated milestone disbursement instruments but face regulatory and counterparty risk barriers.
Paper mentions DeFi as a potential disbursement automation mechanism and notes regulatory/counterparty risk; this is a conditional, context-dependent claim without pilot evidence for large-scale DeFi use.
High-quality labeled IoT traffic is scarce and valuable, and data-sharing mechanisms (federated learning coalitions, data marketplaces) could emerge but require privacy and legal frameworks.
Survey notes about dataset scarcity and potential economic models for data sharing; recommendation that privacy/legal frameworks are prerequisites.
There is a strong commercial opportunity for deployable ML-IDS tailored to IoT and edge deployments, but development and operational costs (data collection, compression, privacy, pipelines) are substantial.
Economic implications and market analysis drawn from the survey: unmet deployment needs, scarce labeled data, and additional engineering requirements imply market demand and higher costs.
Heterogeneous returns: returns to AI will vary across SMEs due to differences in managerial capabilities and local institutional contexts; targeting complementary capabilities may be more cost‑effective than uniform subsidies for hardware/software.
Theoretical conclusion drawn from integrating RBV, dynamic capabilities, and institutional theory across reviewed studies; supported by cited heterogeneity in the literature.
Sector-specific characteristics (regulation, competition intensity, product tangibility) shape the feasibility and design of value-based pricing (VBP) systems.
Thematic cluster from the SLR where sectoral factors were repeatedly cited as influencing VBP design across included studies.
Implementation challenges and pricing dynamics differ between B2B and B2C settings.
SLR thematic coding that separated findings and implementation considerations for B2B versus B2C contexts within the included literature.
Technology and AI are increasingly integrated into pricing processes, but this integration is uneven across contexts and the literature.
Thematic cluster from the SLR indicating growing but uneven mentions and treatments of technology/AI across included studies.
Paper‑based regulatory environments slow DT diffusion; digitised compliance and standardised data schemas can accelerate adoption and enable AI‑driven oversight.
Findings in the review noting regulatory friction and proposed solutions; supported by case evidence where digitisation of compliance facilitated digital workflows.
DT adoption is a socio‑technical transformation that requires governance, standards, collaborative delivery models, and workforce capability building — not just technology deployment.
Conceptual synthesis and cross‑study recommendations in the reviewed literature emphasizing organizational, contractual, and governance changes alongside technology.
Both initial trust and inertia have statistically significant effects on GAICS adoption decisions.
Inferential statistical tests reported in the quantitative phase indicating significant pathways from initial trust and from inertia to adoption outcome (exact effect sizes and sample size not provided in the abstract).
Organizations’ adoption of Generative AI–enabled CRM systems (GAICS) is driven by initial trust and inertia.
Quantitative inferential analysis in the study's second phase testing the conceptual model (paper reports statistically significant relationships between initial trust, inertia, and GAICS adoption). Sample size and sector/country scope not reported in the abstract.
There are incentives to develop privacy‑preserving ML (federated learning, split learning) and lightweight secure hardware for edge VR devices; public funding or prizes could accelerate adoption, whereas strict data‑localization constraints might slow innovation or shift R&D to lenient jurisdictions.
Policy and innovation incentives discussion synthesized from reviewed studies and economic reasoning; no empirical innovation rate or funding‑impact analysis presented.
AI adoption acts as a site of power reconfiguration: roles, relationships, and accountability structures shift as AI is integrated into workflows.
Qualitative workshop data from 15 UX designers describing anticipated or observed shifts in accountability and role boundaries; cross-scale thematic synthesis.
Discourses of efficiency carry ethical and social dimensions—responsibility, trust, and autonomy become central concerns when tools shift who does what and who is accountable.
Recurring themes from the 15 UX designers' discussions and design choices during workshops; thematic coding emphasized responsibility, trust, autonomy linked to efficiency claims.
At the team scale, adoption triggers negotiations over collaboration patterns, division of responsibility, and maintaining design rigor.
Group workshop activities and discussions among UX designers (n=15) where participants described team negotiation scenarios; team-level themes identified in analysis.
At the individual scale, designers expressed trade-offs among efficiency gains, opportunities for skill development, and feelings of professional value.
Individual- and small-group reflections in the 15-person workshop study; thematic coding highlighted these three recurring themes at the individual level.
Organizations frame AI adoption around competitiveness and efficiency, while workers (UX designers) weigh those efficiency framings against professional worth, learning, and autonomy.
Participants' reports during the qualitative design workshops (n=15) showing differences between organizational rhetoric and worker concerns.
Adoption outcomes depend on interactions among individual, team, and organizational incentives and norms (three analytic scales).
Cross-scale coding and synthesis of workshop data from 15 UX designers; analyses grouped themes into individual, team, and organizational scales.
Designers’ decisions about integrating AI reflect trade-offs between efficiency and social/ethical concerns (skill development, autonomy, accountability).
Workshop prompts and group discussions with 15 UX designers; thematic analysis identified recurring trade-off narratives between efficiency and professional/ethical considerations.
AI adoption reconfigures roles, responsibilities, trust, and power within organizations.
Qualitative data from design workshops with 15 UX designers; participants' reflections and group discussions coded using cross-scale thematic analysis (individual, team, organizational).
AI-to-AI communities on Moltbook exhibit discourse that is disproportionately introspective, ritualized in interaction, and affectively redirective, distinguishing it from typical human conversation.
Synthesis of empirical findings from topic modeling (concentrated self-reference), lexical/structural analyses (high formulaic comment rate), coherence metrics (rapid decay with depth), and emotion classification (low alignment, frequent affective redirection) on the 23-day Moltbook dataset.
Governance constraints induce measurable trade-offs between efficiency and compliance; the magnitude of these trade-offs depends on topology and system load.
Simulation experiments in the ablation study varied governance constraint parameters and load, measuring compliance rates and efficiency (value/throughput). Results show systematic reductions in efficiency as compliance constraints tighten, with the effect size modulated by graph topology and load levels.
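A toy version of such an ablation illustrates the mechanism; this is not the paper's simulator, and topology effects are omitted. Tasks either pass a compliance gate or incur a rework step, so tightening the gate lowers throughput-based efficiency:

```python
import random

def simulate(tightness: float, load: int, seed: int = 0) -> dict:
    """Toy governance ablation: each task passes a compliance gate with
    probability (1 - tightness); failures cost one extra rework step.
    Efficiency = completed tasks per simulation step. Illustrative only."""
    rng = random.Random(seed)
    steps = completed = 0
    for _ in range(load):
        steps += 1
        if rng.random() < 1.0 - tightness:   # passes the gate
            completed += 1
        else:                                # rework consumes an extra step
            steps += 1
    return {"compliance_tightness": tightness,
            "efficiency": completed / steps}

loose = simulate(tightness=0.1, load=1000)
tight = simulate(tightness=0.6, load=1000)
```

Comparing the two runs reproduces the qualitative pattern in the claim: the tighter gate yields lower efficiency, and in the paper the size of that gap further depends on topology and load.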
AI agents are useful as breadth tools and for pre-deployment checks, but they lack the protocol-specific and adversarial reasoning required to replace human auditors; human-in-the-loop workflows are their most effective use.
Study observations: agents reliably flag well-known patterns and respond to human-provided context, but fail to perform robust end-to-end exploit generation and are sensitive to scaffolding and configuration.
Virtual–physical ecosystems and continuous validation raise new regulatory models (post-market surveillance, continuous certification), changing compliance costs and liability allocation.
Regulatory and safety implications raised in workshop panels and consensus recommendations captured in the workshop documentation (NSF workshop, Sept 26–27, 2024).
Human–AI collaboration frameworks will shift task allocation in clinical settings, affecting labor demand in clinical roles with potential for both complementarity and substitution effects.
Workshop discussion on systems/workflows and labor impacts from interdisciplinary participants (clinicians, researchers, industry) summarized in the report (NSF workshop, Sept 26–27, 2024).
Investment trade-offs exist between capital intensity (hardware co-design) and broader access; policy should balance platform funding with incentives for diversity and competition.
Workshop discussion and recommendation on funding trade-offs and policy implications from panels at the NSF workshop (Sept 26–27, 2024).
AI functions like a capital-augmenting technology that substitutes for routine tasks while complementing creative and coordination tasks, altering the capital–labor mix and the returns to different types of human capital.
Conceptual framing and synthesis of literature and survey impressions; not directly tested empirically in the paper.
AI-driven automation will shift labor demand away from routine coding toward higher-order tasks (architecture, design, systems thinking, tool supervision), consistent with skill-biased technological change.
Theoretical implications drawn from observed substitution of routine tasks in literature and practitioner expectations in the survey; no labor-market causal analysis presented.
Benefits and uptake of AI tools are heterogeneous: they vary by team size, application domain (e.g., safety-critical vs. consumer software), and organizational process maturity.
Subgroup comparisons implied from survey (e.g., by role or domain) and literature examples; explicit subgroup sample sizes and statistical tests not provided in the summary.
AI augments developers rather than fully replacing them for complex, creative tasks; automation mainly substitutes routine work and complements higher-skill activities.
Synthesis of literature and survey responses indicating tool usage patterns and practitioner expectations about role changes; no experimental displacement studies reported.
If investing in a strong first-stage retriever is feasible, augmenting it with corpus-derived feedback can further improve outcomes; otherwise, LLM-generated feedback is the more economical default.
Experiments that varied first-stage retriever strength and compared downstream gains from corpus-derived versus LLM-generated feedback; combined with cost-effectiveness considerations.
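The cost-effectiveness takeaway reduces to a simple decision rule; a minimal sketch, with illustrative names:

```python
def choose_feedback_source(strong_retriever_feasible: bool) -> str:
    """Decision rule implied by the experiments: corpus-derived feedback
    adds value on top of a strong first-stage retriever, while
    LLM-generated feedback is the more economical default otherwise."""
    return ("corpus-derived" if strong_retriever_feasible
            else "llm-generated")
```

The branch condition is the operative finding: the corpus-derived route is only worth its cost when the stronger retriever is already affordable.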