Evidence (2340 claims)

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Filter: Org Design
The Trust–Complementarity Model of Collective Intelligence (TCM‑CI) explains how calibrated trust and complementary capability utilisation drive superior organisational performance.
Theoretical model proposed by the authors derived from systematic literature synthesis (conceptual/modeling contribution); abstract does not report empirical validation or sample size.
Cross-talk between distributed-systems research and LLM-team research yields rich practical insights.
Conclusion drawn by the authors based on their mapping and findings (qualitative claim supported by the paper's arguments and examples; excerpt lacks concrete metrics).
There is recent and increasing interest in forming teams of LLMs (LLM teams).
Claim made in the paper asserting increased interest and deployment at scale; supported in the paper by literature/contextual citations and reported deployments (specific numbers or studies not provided in the excerpt).
Sustainable human capital development requires coordinated interaction between education systems, employers, and public institutions.
Normative recommendation derived from the paper's systemic analysis and comparative review of institutional responses; no empirical policy evaluation or quantified cross-country causal analysis reported.
Alignment of educational strategies with labor market dynamics is necessary to support effective reskilling and upskilling.
Supported by comparative assessment of international practices and systemic analysis linking education strategies to labor market requirements; evidence is analytical rather than experimental or longitudinally quantified in the paper.
Effective reskilling and upskilling depend on the development of continuous learning ecosystems.
Analytical conclusion drawn from organizational learning models and international practice comparison; no controlled trials or quantitative evaluation of specific ecosystems reported.
As technological change accelerates, the ability of individuals and organizations to adapt becomes a central condition of economic resilience and long-term competitiveness.
Analytical generalization from organizational learning models and systemic analysis of labor-market dynamics; supported by comparative observations but not by a reported empirical causal study.
When upstream foundation model providers offer fine-tuning and inference services to downstream firms, a co-creation dynamic emerges that enhances model quality as downstream firms fine-tune models with proprietary data.
Conceptual claim and theoretical framing in the paper: description of an AI supply-chain interaction where providers supply compute/inference and downstream firms fine-tune with proprietary data; the paper posits this co-creation improves model quality as part of the motivating narrative.
Under pro-price-competitive policies or compute subsidies, the provider and downstream firms can achieve higher profits along with greater consumer surplus (a win-win-win outcome).
Equilibrium profit comparisons in the game-theoretic model showing that, in the parameter regions where these policies raise consumer surplus, both the upstream provider's profit and downstream firms' profits also increase relative to the baseline.
Policies that promote quality competition in downstream markets always improve consumer surplus.
Model outcomes: comparative-static and equilibrium results show that strengthening downstream quality competition monotonically increases consumer surplus across the parameter space considered in the paper.
Pro-price-competitive policies and compute subsidies are complementary: each is effective in different cost regimes and together can cover more cases.
Analytical results from the game-theoretic model showing complementary effectiveness across varying compute/preprocessing cost parameters (comparative statics demonstrating non-overlapping regions of effectiveness).
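The win-win-win logic can be illustrated with a deliberately simplified toy model (not the paper's actual game, whose structure is not given in the excerpt): an upstream provider charges a per-unit fee w to a downstream Cournot duopoly facing linear inverse demand, and a compute subsidy s lowers downstream marginal cost; all parameter values below are invented for illustration.

```python
# Toy illustration (NOT the paper's model): an upstream provider charges
# a per-unit fee w to a downstream Cournot duopoly with linear inverse
# demand P(Q) = a - Q; a compute subsidy s lowers downstream marginal cost.

def outcomes(a=10.0, w=2.0, k=2.0, s=0.0):
    """Return (provider profit, per-firm downstream profit, consumer surplus)."""
    c = w + k - s                  # effective downstream marginal cost
    q = (a - c) / 3.0              # symmetric Cournot quantity per firm (n=2)
    Q = 2.0 * q                    # total quantity
    price = a - Q
    downstream_profit = (price - c) * q
    provider_profit = w * Q        # provider earns the fee on every unit sold
    consumer_surplus = Q ** 2 / 2.0  # triangle under linear demand
    return provider_profit, downstream_profit, consumer_surplus

base = outcomes(s=0.0)
subsidized = outcomes(s=1.0)

# In this parameter region the subsidy raises all three surpluses at once:
assert all(after > before for after, before in zip(subsidized, base))
```

The sketch only shows that such a "win-win-win" region can exist; which cost regimes price-competition policies versus subsidies cover is a result of the paper's model, not of this toy.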
Holistic AI integration across supply chain functions yields greater performance benefits than isolated technological implementations.
Comparative analysis using survey and statistical methods (correlation/regression) on data from supply chain professionals; the summary reports superior outcomes for integrated (ecosystem-level) AI adoption versus isolated implementations, but does not provide the comparative metrics or sample breakdown.
AI-enabled performance management plays a mediating role that strengthens the linkage between strategic planning and operational outcomes.
Mediation analysis conducted on survey data from supply chain professionals (manufacturing and service sectors); the summary indicates a mediating effect of performance management but provides no mediation statistics (indirect effect size, confidence intervals) or sample size.
AI-enabled execution emerged as the strongest direct predictor of supply chain performance.
Regression analysis from the quantitative survey of supply chain professionals comparing AI-enabled planning, execution, and performance management as predictors of supply chain performance; specific coefficients, significance levels, and sample size are not reported in the excerpt.
AI integration significantly improved overall supply chain performance.
Quantitative study using data collected from supply chain professionals and analyzed with reliability testing, correlation, and regression methods; the provided text does not include sample size, p-values, or effect magnitudes.
AI integration significantly improved supply chain responsiveness.
Survey data from supply chain professionals across manufacturing and service sectors analyzed via correlation and regression analyses; the summary does not state sample size or numerical results.
AI integration significantly improved operational efficiency.
Quantitative survey of supply chain professionals (manufacturing and service sectors) with statistical analyses including reliability testing, correlation, and regression; specific sample size and effect sizes not provided in the summary.
AI integration significantly improved forecasting accuracy.
Quantitative survey of supply chain professionals (manufacturing and service sectors) analyzed using reliability testing and correlational/regression statistics; exact sample size and effect size not reported in the provided text.
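The mediation claim above can be made concrete with a minimal product-of-coefficients sketch on synthetic data; the variable names, loadings, and sample size here are invented, not the study's.

```python
import numpy as np

# Minimal product-of-coefficients mediation sketch on SYNTHETIC data
# (illustrative only; not the study's dataset, variables, or N).
rng = np.random.default_rng(0)
n = 300
planning = rng.normal(size=n)                                  # X: strategic planning
perf_mgmt = 0.6 * planning + rng.normal(scale=0.5, size=n)     # M: AI-enabled perf. mgmt
outcome = 0.5 * perf_mgmt + 0.2 * planning + rng.normal(scale=0.5, size=n)  # Y

def ols(y, *xs):
    """Least-squares coefficients (intercept first)."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(perf_mgmt, planning)[1]           # X -> M path
b = ols(outcome, planning, perf_mgmt)[2]  # M -> Y path, controlling for X
indirect = a * b                          # mediated (indirect) effect
direct = ols(outcome, planning, perf_mgmt)[1]
print(f"indirect effect ~ {indirect:.2f}, direct effect ~ {direct:.2f}")
```

A nonzero indirect effect a*b is the pattern a mediation analysis of this kind reports; the study's own indirect effect sizes and confidence intervals are not given in the excerpt.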
The approach provides a closed-form mapping from information primitives to equilibrium outcomes.
Paper presents explicit formulas relating primitives (noise processes/Brownian shocks, signal-generation parameters, payoff matrices) to equilibrium objects (strategies, belief kernels, information wedge, and resulting payoffs).
The characterization yields an explicit information wedge V^i_t — a deterministic Volterra process — that prices the marginal value of shifting opponents' posteriors.
Derived closed-form expression in the paper: defines V^i_t as a deterministic Volterra-type process arising from the fixed-point solution; interprets it as the marginal value (price) of changing opponents' posterior beliefs.
This collapse reduces Nash equilibrium to a deterministic fixed point with no truncation and no large-population limit required.
Analytical reduction presented in the paper: after representing beliefs by deterministic kernels, the equilibrium conditions are expressed as a deterministic fixed-point problem solvable without approximations like truncating the belief hierarchy or taking N→∞.
Conditioning on primitive Brownian shocks (a dynamic analogue of Harsanyi's common-prior construction) collapses the infinite belief hierarchy onto deterministic two-time kernels.
Methodological derivation in the paper: change of conditioning variable from physical state to primitive Brownian shocks yields deterministic two-time kernel representation of agents' beliefs (i.e., belief dynamics become deterministic kernels rather than stochastic hierarchies).
We provide the first exact equilibrium characterization of finite-player continuous-time LQG games with endogenous signals.
Paper's constructive solution: derives an exact equilibrium by conditioning on primitive Brownian shocks and mapping the game to a deterministic fixed point; applies to finite number of players in continuous time with linear-quadratic-Gaussian structure and signals that depend on controls.
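The kernel construction can be sketched schematically in generic LQG notation; the symbols below are illustrative and are not taken from the paper.

```latex
% Schematic only: generic LQG notation, not the paper's exact symbols.
% Conditioning on the primitive Brownian shocks $W^{j}$ rather than on the
% physical state gives agent $i$'s posterior mean a deterministic
% two-time kernel representation:
\[
  \hat{x}^{\,i}_t \;=\; \sum_{j} \int_0^t K^{ij}(t,s)\,\mathrm{d}W^{j}_s ,
\]
% so the infinite belief hierarchy collapses: equilibrium reduces to a
% deterministic fixed-point problem in the kernels $K^{ij}$, with the
% deterministic Volterra process $V^{i}_t$ pricing the marginal value of
% shifting opponents' posteriors.
```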
AI and Big Data enable proactive risk management strategies that contribute to lowering market uncertainty.
Qualitative case studies and quantitative analysis indicating firms used AI/Big Data for proactive risk management; details on number of cases or measurement of 'proactive risk management' not provided in the summary.
The reduction in market uncertainty occurs through enhanced predictive modeling capabilities enabled by AI and Big Data.
Findings reported in the paper attributing improved predictive modeling (from quantitative analysis and case-study observations) as a mechanism for uncertainty reduction (no specific metrics or effect sizes provided in the summary).
Strategic integration of AI and Big Data can significantly reduce market uncertainty during periods of economic turbulence.
Mixed-methods study combining quantitative analysis of market data and qualitative case studies of firms implementing AI and Big Data solutions (specific sample size and statistical details not provided in the summary).
The findings provide practical guidance for entrepreneurs on building adaptive, AI-integrated organizations by redefining hiring, decision processes, and learning practices.
Prescriptive recommendations derived from the interview analysis and observed patterns in the sample of entrepreneurs (qualitative grounding; specific examples or measured impacts not provided in the excerpt).
Hybrid decision architectures have emerged: startup-specific configurations where algorithmic reasoning and human judgment recursively interact to shape decisions, roles and routines.
Thematic synthesis of interview data identifying recurring patterns of human–AI recursive interaction in decision-related practices across the studied startups (qualitative evidence; no quantitative counts reported).
Entrepreneurs who founded startups after ChatGPT's release integrated AI into their post-release ventures.
Direct accounts from the subset of interviewees who founded startups after ChatGPT's release describing AI incorporation in those ventures (qualitative interview evidence; sample details not given).
AI is becoming embedded in the architecture of startups rather than serving only as a task-automation tool.
Interview data and qualitative analysis identifying patterns of AI integration across startup roles, routines and structures (derived from the same semi-structured interview sample; exact N not provided).
Facilitated access to AI following the release of ChatGPT is transforming how startups organize and make decisions.
Qualitative study using semi-structured interviews with entrepreneurs who founded startups both before and after ChatGPT's release and who integrated AI into their post-release ventures; thematic/qualitative analysis of interview data. (Sample size not reported in the provided excerpt.)
Education, reskilling, and institutional responses are important in shaping the economic outcomes of artificial intelligence.
Policy implication derived from the observed/modeled heterogenous effects of AI on occupations and productivity; presented as a normative recommendation rather than an empirically tested result in the provided text.
Productivity gains associated with AI may support long-term economic growth.
Reference to productivity data and growth theory linking productivity improvements to long-run growth; the paper states this as a potential outcome but does not provide quantified long-run estimates or empirical identification in the excerpt.
AI complements higher-skill labor.
Interpretation of labor market data patterns and theoretical task-complementarity arguments presented in the paper; empirical details (which datasets, estimation strategy, sample size) are not provided in the text excerpt.
Artificial intelligence is a skill-biased technological innovation.
Framing and argumentation in the paper situating AI within the skill-biased technical change literature; references to analyses of publicly available labor market and productivity data (sources, time periods, and sample sizes not specified in the text).
High current usage, breadth of application, frequent use of AI tools for testing, and ease of use correlate strongly with future intended adoption.
Correlational/regression analyses of survey variables (N=147) predicting respondents' stated future intention to increase AI tool use from measures of current usage, breadth of tool applications, frequency of testing-tool use, and perceived ease-of-use.
Developers report both productivity and quality gains from using AI tools.
Aggregate self-reported responses from 147 professional developers indicating perceived improvements in productivity and code quality associated with AI tool use.
There is no perceptual support for the Quality Paradox; Perceived Productivity (PP) is positively correlated with Perceived Code Quality (PQ) improvement.
Statistical analysis of survey measures (N=147) showing a positive correlation between respondents' Perceived Productivity scores and their Perceived Code Quality improvement scores; absence of evidence for a negative PP–quality relationship.
Frequent and broad AI tool use are the strongest correlates of both Perceived Productivity (PP) and quality, with frequency strongest.
Correlational analysis of self-reported survey responses from a sample of 147 professional developers measuring AI tool usage frequency and breadth and perceived outcomes (Perceived Productivity and Perceived Code Quality).
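The kind of correlational analysis these entries describe can be sketched on synthetic data; N=147 mirrors the study only by construction, and the variable loadings are invented.

```python
import numpy as np

# Sketch of a correlational survey analysis on SYNTHETIC data
# (N=147 chosen to mirror the study; variables and loadings are invented).
rng = np.random.default_rng(42)
n = 147
usage_freq = rng.normal(size=n)                            # frequency of AI tool use
usage_breadth = 0.5 * usage_freq + rng.normal(scale=0.8, size=n)
pp = 0.6 * usage_freq + 0.3 * usage_breadth + rng.normal(scale=0.6, size=n)  # Perceived Productivity
pq = 0.5 * pp + rng.normal(scale=0.7, size=n)              # Perceived Code Quality

r_pp_pq = np.corrcoef(pp, pq)[0, 1]
r_freq_pp = np.corrcoef(usage_freq, pp)[0, 1]
print(f"corr(PP, PQ) = {r_pp_pq:.2f}; corr(frequency, PP) = {r_freq_pp:.2f}")

# A positive corr(PP, PQ) is the pattern reported against the "Quality
# Paradox" (the conjecture that higher perceived productivity comes with
# lower perceived quality).
```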
Adopting a standardised yet flexible approach to incentive design can help produce more reliable and generalizable knowledge in human–AI decision-making research.
Authors' argument/recommendation based on their thematic review and the proposed framework (this is a normative claim; no empirical validation provided in excerpt).
Human judgement remains paramount for high-stakes decision-making.
Assertion in the paper framing the motivation for human–AI collaboration research (based on prior literature and domain practice; no specific empirical data or sample sizes provided in excerpt).
AI has revolutionised decision-making across various fields.
Statement in paper's introduction summarizing prior work and trends (literature-level claim; no specific studies or sample sizes provided in excerpt).
Overall, the framework improves efficiency, fairness, and quality of care in hospital workforce management.
Aggregate conclusion drawn from experiments (forecasting metrics, scheduling conflict/fairness improvements, performance evaluation results, stress tests, and pilot deployment outcomes) described in the paper.
Pilot deployments of the framework demonstrated tangible benefits, including an 18% reduction in patient waiting times and a 14% improvement in satisfaction scores.
Reported outcomes from pilot deployments (real-world trials); the number of pilot sites, duration, patient/sample sizes, and baseline comparison methodology are not detailed in the provided text.
Stress tests confirmed scalability: solver times remained under 95 seconds for instances with 1,000 staff members.
Scalability/stress testing reported in the paper using scheduling solver on problem instances with up to 1,000 staff; hardware and solver configuration not specified in the excerpt.
The performance evaluation framework analysis revealed 74% positive patient feedback.
Reported result from NLP analysis of patient surveys in the experiments; the number of patient survey responses and timeframe are not provided in the excerpt.
The intelligent staff scheduling module reduces scheduling conflicts by 41% compared to conventional methods while improving fairness (Gini coefficient = 0.08).
Results from scheduling optimization experiments reported in the paper; comparison against unspecified 'conventional methods'; specific experimental sample sizes (number of staff/rosters used for the comparison) not provided in the excerpt.
Workforce demand forecasting using LSTM, XGBoost, and Random Forest models predicts patient admissions and staffing needs, with LSTM achieving the best performance (MAE = 6.1, R² = 0.91).
Experimental comparison of ML models on synthetic and real hospital datasets; reported forecasting metrics MAE and R² for LSTM (other models' metrics not quoted in the provided text). The specific dataset size and train/test splits are not reported in the excerpt.
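The evaluation metrics these entries quote (MAE, R², and a Gini coefficient over shift allocations) are standard and can be computed as follows; the arrays below are invented placeholders, not the paper's hospital data.

```python
import numpy as np

# How the quoted evaluation metrics are computed; the arrays below are
# invented placeholders, not the paper's hospital data.

def mae(y_true, y_pred):
    """Mean absolute error of a forecast."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def gini(x):
    """Gini coefficient of a non-negative allocation (0 = perfectly even)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

admissions = np.array([120.0, 135, 150, 128, 140])   # placeholder daily admissions
forecast = np.array([126.0, 130, 144, 131, 138])     # placeholder forecasts
shift_hours = np.array([38.0, 40, 39, 41, 40, 42])   # placeholder hours per nurse

print(f"MAE = {mae(admissions, forecast):.1f}, R2 = {r2(admissions, forecast):.2f}")
print(f"Gini of shift hours = {gini(shift_hours):.2f}")
```

A Gini coefficient of 0.08, as quoted above, corresponds to a near-uniform distribution of shift hours across staff.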
AI-supported HR processes would have produced measurable increases in output per worker (labor productivity).
Counterfactual simulations and predictive estimates from the industrial firm dataset projecting output per worker under AI-HRM scenarios.
AI-HRM would have led to better alignment between training and production needs (improved targeting of training intensity to production requirements).
Model links training intensity to production outcomes and projects improved training–production alignment under AI-supported HR processes via regression-based simulations. (Quantitative magnitudes not specified in the description.)
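A regression-based counterfactual of the kind described can be sketched on synthetic data; the coefficients, scenario shift, and sample are invented, not the industrial firm's dataset.

```python
import numpy as np

# Sketch of a regression-based counterfactual projection (SYNTHETIC data;
# not the firm's dataset): fit output-per-worker on training intensity,
# then project output under an AI-targeted training scenario.
rng = np.random.default_rng(7)
n = 200
training = rng.uniform(0, 10, size=n)                        # observed training intensity
output = 50 + 2.0 * training + rng.normal(scale=3, size=n)   # output per worker

# OLS fit: output ~ intercept + slope * training
X = np.column_stack([np.ones(n), training])
beta = np.linalg.lstsq(X, output, rcond=None)[0]

# Counterfactual scenario: AI-HRM raises training intensity by 1.5 units
training_cf = training + 1.5
projected = beta[0] + beta[1] * training_cf
gain = projected.mean() - output.mean()
print(f"projected gain in output per worker ~ {gain:.2f}")
```

The projected gain is just the fitted slope times the assumed scenario shift; the study's own magnitudes are not reported in the excerpt.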