Evidence (3062 claims)
Claim counts by category:

- Adoption: 5227 claims
- Productivity: 4503 claims
- Governance: 4100 claims
- Human-AI Collaboration: 3062 claims
- Labor Markets: 2480 claims
- Innovation: 2320 claims
- Org Design: 2305 claims
- Skills & Training: 1920 claims
- Inequality: 1311 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
Human-AI Collaboration (3062 claims)
The paper characterizes the information cost of aggregating preferences when AI can generate essentially unlimited candidate alternatives, providing sample-complexity upper bounds and matching lower bounds.
The combination of sampling-model formalization, sample-complexity upper bounds, and matching lower bounds constitutes a formal characterization of the information (sample) requirements.
The authors prove an upper bound on the number of samples/queries required by their algorithm as a function of accuracy, confidence, and problem parameters.
Theoretical analysis in the paper deriving explicit sample-complexity upper bounds (stated as functions of accuracy/confidence and relevant parameters).
Under only query (sampling) access to the unknown joint distribution of voters and alternatives, there is an efficient sampling-based algorithm that, with high probability, returns an alternative in the approximate proportional veto core.
Constructive algorithm and correctness proof in the paper showing the algorithm returns an approximate core alternative with high probability under the sampling access model.
The paper formalizes the proportional veto core for settings with an infinite alternative space and voters whose preferences are drawn from an unknown distribution.
Formal model and definitions presented in the paper: extension of the proportional veto core to an infinite alternative space and definitions for sampling-appropriate approximate proportional veto core.
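The excerpt does not reproduce the algorithm itself. A minimal sketch of the generic pattern it describes, Hoeffding sampling plus a union bound over candidates, with all interface names hypothetical and the veto notion simplified for illustration:

```python
import math

def sample_complexity(eps: float, delta: float, k: int) -> int:
    """Hoeffding plus a union bound over k candidates: this many i.i.d.
    samples keep every empirical fraction within eps of its mean with
    probability at least 1 - delta."""
    return math.ceil(math.log(2 * k / delta) / (2 * eps ** 2))

def approx_veto_core(sample_voter, candidates, eps, delta):
    """Return the candidate with the smallest estimated veto mass.

    sample_voter() draws one voter and returns a scoring function
    pref(alt), higher meaning better; a sampled voter 'vetoes' any
    alternative in the bottom eps-fraction of their ranking over the
    candidate set. All interface names here are hypothetical."""
    m = sample_complexity(eps, delta, len(candidates))
    voters = [sample_voter() for _ in range(m)]
    bottom_k = max(1, int(eps * len(candidates)))

    def veto_frac(alt):
        hits = 0
        for pref in voters:
            worst_first = sorted(candidates, key=pref)  # ascending score
            if alt in worst_first[:bottom_k]:
                hits += 1
        return hits / m

    return min(candidates, key=veto_frac)
```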
Temporally grounding model inputs (constraining models to contemporaneous public information at each node) substantially reduces the risk of training-data leakage and hindsight bias.
Study design enforced node-specific contemporaneous evidence constraints for each of the 11 nodes; methodological rationale and comparison to unconstrained settings described as reducing retrospective information contamination.
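A minimal sketch of the kind of per-node evidence filter this design implies (the `published` and `date` field names are assumptions, not the study's schema):

```python
from datetime import date

def contemporaneous_evidence(documents, node_date: date):
    """Keep only documents published on or before the node's date, so the
    model cannot condition on retrospective information. The `published`
    field is an assumed name, not the study's actual schema."""
    return [d for d in documents if d["published"] <= node_date]

# Usage: build one evidence set per node before prompting.
# evidence_by_node = {n["id"]: contemporaneous_evidence(corpus, n["date"])
#                     for n in nodes}
```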
BenchPreS can be used as an evaluative tool for mechanism designers and regulators to measure and compare models' context‑sensitivity to guide incentives, penalties, or certification regimes.
Methodological claim about the benchmark's applicability: BenchPreS produces MR and AAR metrics that can be used for comparisons; paper suggests use in policy/design contexts.
BenchPreS provides a benchmark and evaluation protocol that systematically varies stored user preference, interaction partner (self vs third party), and normative requirement to assess appropriate suppression or application of preferences.
Dataset construction and evaluation procedure described: scenario generation varying preference, partner, and normative appropriateness; MR and AAR computed across the scenario set.
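The excerpt does not expand the MR and AAR acronyms. Purely as an illustration, assuming MR counts preference applications where suppression was required and AAR counts applications where applying was appropriate, the metrics could be computed like this:

```python
def mr_aar(scenarios):
    """Compute two BenchPreS-style rates over a scenario set. Each scenario
    is assumed to record `applied` (model applied the stored preference)
    and `should_apply` (normative label). Under this hypothetical reading,
    MR = rate of applying when suppression was required, and
    AAR = rate of applying when application was appropriate."""
    suppress = [s for s in scenarios if not s["should_apply"]]
    appropriate = [s for s in scenarios if s["should_apply"]]
    mr = sum(s["applied"] for s in suppress) / len(suppress)
    aar = sum(s["applied"] for s in appropriate) / len(appropriate)
    return mr, aar
```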
The paper advances a replicable interdisciplinary synthesis method and provides a simulated dataset and transparent protocols enabling other researchers to adapt the approach.
Methods section detailing systematic literature search protocols (ACM/IEEE/Springer, 2020–2024), inclusion criteria, simulation parameterization for the cross-sectoral dataset (seven industries, 2020–2024), and stated reproducibility materials.
AI adoption is strongly associated with workforce skill transformation (reported correlation r = 0.71).
Correlational analysis reported in the paper using the simulated cross-sectoral dataset that mirrors employment trends across seven industries (Manufacturing, Healthcare, Finance, Education, Transportation, Retail, IT Services) over 2020–2024. This corresponds to sector-year observations (7 sectors × 5 years = 35 observations) and is triangulated with findings from a systematic literature synthesis (ACM, IEEE, Springer publications 2020–2024).
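To make the unit of analysis concrete, the correlation is computed over 35 sector-year rows. A toy computation on synthetic stand-in values (not the paper's data):

```python
import numpy as np

# 7 sectors x 5 years = 35 sector-year rows (synthetic stand-in values,
# not the paper's data)
rng = np.random.default_rng(0)
ai_adoption = rng.uniform(0, 1, size=35)                      # adoption index
skill_transform = 0.7 * ai_adoption + rng.normal(0, 0.2, 35)  # skill index

r = np.corrcoef(ai_adoption, skill_transform)[0, 1]
print(f"Pearson r = {r:.2f}")  # the paper reports r = 0.71 on its own data
```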
The evaluation compared models on multiple metrics (accuracy, precision, recall, F1, AUC) across repeated trials and cross-company tests, and reported gains for AI methods across these metrics.
Evaluation protocol described: repeated trials, cross-validation, holdout sets, cross-company tests; reported performance improvements for AI models on the listed metrics.
Ensemble methods and deep learning models show the largest and most consistent improvements in predictive performance relative to classic statistical models.
Aggregate results across repeated trials and evaluation metrics indicate Random Forests and Gradient Boosting (ensembles) and deep neural networks outperform linear/logistic regression and other baselines on the publicly available datasets used.
Modern AI-driven prediction methods (especially ensemble models and deep neural networks) systematically outperform traditional statistical approaches at predicting job performance in publicly available workforce datasets.
Direct model comparison reported in the paper: baseline statistical models (linear/logistic regression) versus machine learning models (Random Forest, Gradient Boosting, SVM, deep neural networks) evaluated on multiple publicly available workforce datasets using cross-validation and holdout sets; performance reported on accuracy, precision, recall, F1, and AUC across repeated trials.
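A minimal sketch of this comparison protocol using scikit-learn on stand-in data (the paper's datasets, model settings, and holdout scheme are not given in the excerpt):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Stand-in data; the paper uses publicly available workforce datasets.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

metrics = ["accuracy", "precision", "recall", "f1", "roc_auc"]
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),  # baseline
    "random_forest": RandomForestClassifier(random_state=0),   # ensemble
}
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=metrics)
    print(name, {m: round(cv[f"test_{m}"].mean(), 3) for m in metrics})
```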
Research priorities include rigorous real-world trials assessing patient outcomes, cost-effectiveness, and labor impacts; comparative studies of integration strategies; measurement of long-run workforce effects; and development of standard metrics and monitoring frameworks.
Explicit recommendations from the narrative review based on identified gaps: scarcity of RCTs, economic analyses, and long-term workforce studies.
Economists and researchers should measure organizational mediators (governance, mentoring practices, learning processes) alongside AI adoption and use empirical designs such as difference-in-differences with phased rollouts, randomized mentoring/training interventions, matched employer–employee panels, and IV exploiting exogenous shocks to innovation backing to identify causal effects.
Methodological recommendations and proposed empirical designs contained in the paper; no implementation or empirical results reported.
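As a sketch of one of the recommended designs, a two-period difference-in-differences on a synthetic panel, with the effect read off the treated:post interaction (illustrative only, not from the paper):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic two-period firm panel with a true treatment effect of 1.0.
rng = np.random.default_rng(1)
firms = 200
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), 2),
    "post": np.tile([0, 1], firms),
    "treated": np.repeat(rng.integers(0, 2, firms), 2),
})
df["y"] = 1.0 * df["treated"] * df["post"] + rng.normal(0, 1, len(df))

# The treated:post coefficient is the DiD estimate under parallel trends.
fit = smf.ols("y ~ treated + post + treated:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(fit.params["treated:post"])
```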
The integrated framework links multi-level outcomes: micro (individual skills, task performance), meso (team coordination, workflows), and macro (organizational strategy, innovation, productivity) effects to adaptive structuration processes and affordance actualization.
Framework specification and theoretical mapping across levels in the conceptual paper; no empirical validation or sample.
The paper develops a conceptual framework that integrates Adaptive Structuration Theory (AST) and Affordance Actualization Theory (AAT) to explain how effective human–AI collaboration can be structured within organizations.
Conceptual/theoretical synthesis and literature integration combining AST and AAT streams; no original empirical data or sample reported (theoretical development).
As the competition progressed, teams relied more on the AI for larger subtasks (increasing delegation and reliance).
Time-series instrumentation of AI interactions and participant behavior during the live CTF with 41 participants showing increased frequency and scope of delegated tasks later in the event.
One autonomous agent finished second overall on the fresh challenge set.
Final ranking/scoreboard from benchmarking the four autonomous agents against the live CTF challenge set and human teams; agent achieved overall 2nd place.
In a live onsite Capture-the-Flag (CTF) study (41 participants), human teams increasingly delegated larger subtasks to an instrumented AI as the competition progressed.
Empirical observation and instrumentation of AI interactions during a live, onsite CTF with 41 human participants/teams; delegation and task-size metrics tracked over time during the event.
Reward shaping at the assignment layer enables an explicit trade-off between diagnostic accuracy and human labor by incorporating penalties for human involvement.
Methodology section describing reward shaping and experimental comparisons showing different accuracy/human-effort trade-offs (results reported in paper; exact experimental details not provided in the summary).
Masked reinforcement learning constrains (masks) the action space, reducing exploration over very large symptom/action spaces.
Paper describes use of masked RL to limit action options during training and execution; used in both assignment and execution layers (methodological claim supported by algorithmic description and experiments).
The upper layer ('master') learns turn-by-turn human–machine assignment using masked reinforcement learning with reward shaping to balance accuracy and human cost.
Methodological description in the paper and empirical results from experiments using masked RL and reward-shaped objectives at the assignment layer (implementation and experimental setup reported; dataset/sample size not specified in summary).
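A minimal sketch of the two mechanisms, action masking and a human-cost penalty in the reward, with hypothetical function names (the paper's architecture and hyperparameters are not given in the summary):

```python
import numpy as np

def masked_action(logits: np.ndarray, valid: np.ndarray) -> int:
    """Action masking: invalid actions get -inf logits, so the softmax
    assigns them zero probability and they are never explored."""
    masked = np.where(valid, logits, -np.inf)
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

def shaped_reward(correct: bool, human_involved: bool,
                  human_cost: float = 0.1) -> float:
    """Assignment-layer reward shaping: accuracy reward minus a penalty
    for routing the turn to a human; human_cost is the knob that trades
    diagnostic accuracy against human labor."""
    return float(correct) - human_cost * float(human_involved)
```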
Service empathy mediates the relationship between employee emotion and collaboration proficiency.
Mediation analysis conducted on the experimental sample (n = 861) showing that measured 'service empathy' accounts for (part of) the effect of employee emotion on collaboration proficiency.
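A sketch of the standard product-of-coefficients estimator for such a mediation, with employee emotion as X, service empathy as M, and collaboration proficiency as Y (the paper's exact procedure, e.g. bootstrap details, is not given in the excerpt):

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation estimate: a = effect of X on M,
    b = effect of M on Y controlling for X; indirect effect = a * b."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b
```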
The paper advances augmentation debates by articulating the leader’s practical role when decision lead‑agency shifts between humans and AI and by detailing systemic HR changes needed to sustain performance, legitimacy and well‑being.
Stated contribution of the conceptual synthesis comparing existing augmentation and leadership literatures and providing an HR‑focused framework; descriptive of the paper's intellectual contribution.
Core practice 4 — Embed governance: make accountability, bias testing, privacy safeguards, audit trails, escalation thresholds and human oversight explicit and routine.
Prescriptive governance practice grounded in literature on algorithmic accountability and risk management and in practitioner examples; presented without original empirical validation.
Core practice 3 — Manage the human–AI relationship: build adoption, psychological safety and calibrated trust; address automation anxiety and misuse.
Framework recommendation synthesizing organizational‑psychology and technology adoption literature plus practitioner observations; not tested empirically in the paper.
Core practice 2 — Treat AI outputs as hypotheses: require human sensemaking and validation rather than blind adoption of model outputs.
Prescriptive practice derived from reviewed research and practitioner cases emphasizing human oversight; presented as framework guidance rather than empirically validated intervention.
Core practice 1 — Allocate work by comparative advantage: assign tasks to humans or AI based on relative strengths (e.g., speed, pattern detection, contextual judgement).
Conceptual component of the framework drawn from synthesis of empirical findings in prior human–AI and task allocation literature and practitioner examples; no new empirical testing in the paper.
Research agenda priorities include: empirically quantifying the value of digital twins on R&D productivity; studying complementarities between AI tools and tacit sensory knowledge; measuring cultural translation costs; and analyzing market concentration risks from proprietary sensory models.
List of recommended empirical research directions derived from conceptual analysis and gap identification; no primary empirical work conducted within the paper itself.
The collection emphasizes resolving methodological challenges such as ecological validity, generalization across environments, and integration of domain knowledge, rather than purely optimizing benchmarks.
Methodological-focus summary from the collection indicating emphasis on ecological validity, generalization, and domain-knowledge integration across multiple papers.
Early applications focused on automating straightforward, repetitive tasks (e.g., filtering blank camera‑trap images); current work aims for deeper integration with ecological questions.
Historical-arc observation drawn from the collection's examples and classifications of papers (descriptive review of prior vs. current papers in the collection).
The AI–ecology interface is maturing from simple, task‑automation proofs of concept into genuinely interdisciplinary work that advances both AI methods and ecological science.
Synthesis of the paper collection (mix of methodological, empirical, and translational papers) and the paper's summary of trends across those contributions (no single-sample experiment; claim based on cross-paper review).
Seed 2.0 Lite achieved a 75.7% success rate in the with-skill condition, an increase of +18.9 percentage points over baseline.
Model-specific reported result in the paper: Seed 2.0 Lite with-skill success rate (75.7%) and reported improvement (+18.9pp); reported from the benchmark runs.
GLM-5 Turbo achieved a 78.4% success rate in the with-skill condition, an increase of +5.4 percentage points over baseline.
Model-specific reported result in the paper: GLM-5 Turbo with-skill success rate (78.4%) and reported improvement (+5.4pp); based on the benchmark evaluation.
Nemotron 120B achieved a 78.4% success rate in the with-skill condition, an increase of +18.9 percentage points over baseline.
Model-specific reported result in the paper: Nemotron 120B with-skill success rate (78.4%) and reported improvement (+18.9pp); results drawn from the benchmark runs.
MiniMax M2.5 achieved an 81.1% success rate in the with-skill condition, an increase of +13.5 percentage points over baseline.
Model-specific reported result in the paper: MiniMax M2.5 with-skill success rate (81.1%) and reported improvement (+13.5pp); based on subset of the 185 scenario-runs across the evaluated models.
Results across 5 open-weight model conditions and 185 scenario-runs show consistent skill lift across all models.
Aggregate experimental results reported in the paper: evaluation over 5 model conditions and 185 scenario-runs, with cross-model improvement when SKILL is provided.
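The implied no-skill baselines follow directly from the reported with-skill rates and lifts:

```python
# Baselines implied by the reported with-skill rates and lifts (in pp).
# Only four of the five model conditions are itemized in this excerpt.
reported = {
    "Seed 2.0 Lite": (75.7, 18.9),
    "GLM-5 Turbo":   (78.4, 5.4),
    "Nemotron 120B": (78.4, 18.9),
    "MiniMax M2.5":  (81.1, 13.5),
}
for model, (with_skill, lift) in reported.items():
    print(f"{model}: implied baseline = {with_skill - lift:.1f}%")
# Seed 2.0 Lite 56.8%, GLM-5 Turbo 73.0%, Nemotron 120B 59.5%,
# MiniMax M2.5 67.6%
```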
AI-adopting firms increase R&D expenditures following adoption.
Firm financial data showing higher R&D spending for adopters relative to nonadopters in post-adoption periods using the diff-in-diff framework.
Post-adoption patents by AI adopters receive more citations than those of nonadopters.
Difference-in-differences estimates comparing citation counts per patent before and after AI installation versus nonadopters; patent citation data used as the dependent variable.
Firms that adopt AI subsequently increase patenting relative to nonadopters.
Firm-level analysis using a novel AI adoption measure based on timing of AI product installations and a stacked difference-in-differences design exploiting staggered adoption; dependent variable = firm patent counts (patenting rate). (Sample size and exact time period not specified in the provided text.)
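A sketch of the stacking step in such a design, building one clean cohort-specific comparison per adoption year before pooling (column names are assumptions; the paper's exact specification is not given):

```python
import pandas as pd

def stack_cohorts(panel: pd.DataFrame, window: int = 3) -> pd.DataFrame:
    """Stacked DiD for staggered adoption: one clean 2x2 comparison per
    adoption cohort (adopters vs. never- or not-yet-adopters inside an
    event window), then pool the stacks. Columns firm/year/adopt_year
    (NaN for never-adopters) and the outcome are assumed names."""
    stacks = []
    for g in sorted(panel["adopt_year"].dropna().unique()):
        win = panel[panel["year"].between(g - window, g + window)].copy()
        clean = win[(win["adopt_year"] == g)
                    | win["adopt_year"].isna()
                    | (win["adopt_year"] > g + window)].copy()
        clean["treated"] = (clean["adopt_year"] == g).astype(int)
        clean["post"] = (clean["year"] >= g).astype(int)
        clean["cohort"] = g
        stacks.append(clean)
    return pd.concat(stacks, ignore_index=True)

# Then regress the outcome (e.g., patent counts) on treated:post with
# cohort-by-year and cohort-by-firm fixed effects on the stacked data.
```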
Programming experience significantly improved code security.
Association found in the study between participants' programming experience (general programming experience measured for each participant) and the security of their submitted code; statistical analysis in the sample (n = 159) showed a significant positive effect of experience on code security.
Using distributed systems as a principled foundation is a useful approach for creating and evaluating LLM teams.
Primary methodological proposal of the paper; supported by conceptual argument and (per the paper) mappings between distributed-systems concepts and LLM team design (specific experimental validation not detailed in the excerpt).
Large language models (LLMs) are growing increasingly capable.
Statement in the paper's introduction/abstract summarizing the field; based on observed progress in LLM development cited by the authors (no experimental sample size provided in the excerpt).
Only seven of the 49 specialized skills produce meaningful gains (up to +30%).
Empirical results showing that 7 out of 49 skills yielded meaningful positive improvements in acceptance-test pass rates, with gains up to 30%.
The average gain from injecting skills is only +1.2% in pass rate.
Aggregated pass-rate differences computed across the benchmark tasks comparing with-skill vs without-skill conditions, reported as an average +1.2% gain.
Analysis of benchmark data (n = 667) reveals substantial synergy effects: Llama-3.1-8B improves human performance by 23 percentage points.
Empirical analysis of the same benchmark dataset (n = 667) using the Bayesian IRT model; reported improvement in human performance with Llama-3.1-8B assistance of +23 percentage points.
Analysis of benchmark data (n = 667) reveals substantial synergy effects: GPT-4o improves human performance by 29 percentage points.
Empirical analysis of a benchmark dataset of n = 667 using the paper's Bayesian IRT framework; reported improvement in human performance with GPT-4o assistance of +29 percentage points.
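A simplified Rasch-style stand-in for the synergy estimate (the paper's Bayesian IRT parameterization is not given in the excerpt; the additive `synergy` term is chosen for illustration):

```python
import numpy as np

def p_correct(theta, b, synergy=0.0):
    """Rasch-style item response: P(correct) = sigmoid(theta - b + synergy),
    where synergy is an additive AI-assistance term. A simplified stand-in,
    not the paper's actual Bayesian IRT parameterization."""
    return 1.0 / (1.0 + np.exp(-(theta - b + synergy)))

# Illustrative: average accuracy over 667 item difficulties.
b = np.random.default_rng(2).normal(0, 1, 667)
lift = (p_correct(0.0, b, synergy=1.4) - p_correct(0.0, b)).mean()
print(f"lift ~ {lift * 100:.0f} pp")  # same order as the reported +23/+29 pp
```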
The article discusses managerial and public-policy implications for reducing friction, accelerating responsible adoption, and guiding investments in productivity and inclusion.
Discussion section mentioned in the abstract addressing managerial burdens and public policy; no empirical policy evaluation appears in the abstract.
The article delivers replicable instruments (the SCF-30 scale, a minimal AI-governance checklist, and a 30-60-90-day matrix) for practical use.
Explicit statement in the abstract that replicable instruments are made available; the instruments are presumed to be included in the body of the article.
High-quality chatbots (96–100% accurate) improved caseworker accuracy by 27 percentage points.
Experimental result reported in paper: treatment with chatbots at 96–100% aggregate accuracy produced a 27 percentage-point increase in caseworker accuracy compared to control; based on the randomized experiment on the 770-question benchmark.