The Commonplace

Evidence (3062 claims)

Adoption: 5227 claims
Productivity: 4503 claims
Governance: 4100 claims
Human-AI Collaboration: 3062 claims
Labor Markets: 2480 claims
Innovation: 2320 claims
Org Design: 2305 claims
Skills & Training: 1920 claims
Inequality: 1311 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 373 105 59 439 984
Governance & Regulation 366 172 115 55 718
Research Productivity 237 95 34 294 664
Organizational Efficiency 364 82 62 34 545
Technology Adoption Rate 293 118 66 30 511
Firm Productivity 274 33 68 10 390
AI Safety & Ethics 117 178 44 24 365
Output Quality 231 61 23 25 340
Market Structure 107 123 85 14 334
Decision Quality 158 68 33 17 279
Fiscal & Macroeconomic 75 52 32 21 187
Employment Level 70 32 74 8 186
Skill Acquisition 88 31 38 9 166
Firm Revenue 96 34 22 152
Innovation Output 105 12 21 11 150
Consumer Welfare 68 29 35 7 139
Regulatory Compliance 52 61 13 3 129
Inequality Measures 24 68 31 4 127
Task Allocation 71 10 29 6 116
Worker Satisfaction 46 38 12 9 105
Error Rate 42 47 6 95
Training Effectiveness 55 12 11 16 94
Task Completion Time 76 5 4 2 87
Wages & Compensation 46 13 19 5 83
Team Performance 44 9 15 7 76
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 18 16 9 5 48
Job Displacement 5 29 12 46
Social Protection 19 8 6 1 34
Developer Productivity 27 2 3 1 33
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 8 4 9 21
Active filter: Human-AI Collaboration
The paper characterizes the information cost of aggregating preferences when AI can generate essentially unlimited candidate alternatives, providing tight sample-complexity upper and lower bounds.
The combination of sampling-model formalization, sample-complexity upper bounds, and matching lower bounds constitutes a formal characterization of the information (sample) requirements.
high positive Finding Common Ground in a Sea of Alternatives sample/query complexity as the measure of information cost
The authors prove an upper bound on the number of samples/queries required by their algorithm as a function of accuracy, confidence, and problem parameters.
Theoretical analysis in the paper deriving explicit sample-complexity upper bounds (stated as functions of accuracy/confidence and relevant parameters).
high positive Finding Common Ground in a Sea of Alternatives sample/query complexity required for the algorithm to achieve specified accuracy...
Under only query (sampling) access to the unknown joint distribution of voters and alternatives, there is an efficient sampling-based algorithm that, with high probability, returns an alternative in the approximate proportional veto core.
Constructive algorithm and correctness proof in the paper showing the algorithm returns an approximate core alternative with high probability under the sampling access model.
high positive Finding Common Ground in a Sea of Alternatives probability that the algorithm's output lies in the approximate proportional vet...
The paper formalizes the proportional veto core for settings with an infinite alternative space and voters whose preferences are drawn from an unknown distribution.
Formal model and definitions presented in the paper: extension of the proportional veto core to an infinite alternative space and definitions for sampling-appropriate approximate proportional veto core.
high positive Finding Common Ground in a Sea of Alternatives formal definition / existence of an appropriate approximate proportional veto-co...
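The paper's exact bounds are not quoted here; sample-complexity upper bounds of this kind typically take a Hoeffding-style form, n = O(log(1/δ)/ε²). A minimal sketch under that assumption (estimating a [0,1]-bounded quantity such as the fraction of voters who veto a candidate alternative; the function and numbers are illustrative, not the paper's bound):

```python
import math

def hoeffding_sample_bound(epsilon: float, delta: float) -> int:
    """Samples needed so the empirical mean of [0,1]-bounded draws is within
    epsilon of its expectation with probability at least 1 - delta
    (two-sided Hoeffding bound): n >= ln(2/delta) / (2 * epsilon**2)."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

# e.g. estimating a veto fraction to within ±0.05 with 99% confidence:
n = hoeffding_sample_bound(epsilon=0.05, delta=0.01)  # -> 1060
```

Note how the bound depends only on accuracy and confidence, not on the (infinite) size of the alternative space — the property that makes sampling-based aggregation feasible here.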
Temporally grounding model inputs (constraining models to contemporaneous public information at each node) substantially reduces the risk of training-data leakage and hindsight bias.
Study design enforced node-specific contemporaneous evidence constraints for each of the 11 nodes; methodological rationale and comparison to unconstrained settings described as reducing retrospective information contamination.
high positive When AI Navigates the Fog of War presence/absence or reduction of training-data leakage/hindsight bias (procedura...
BenchPreS can be used as an evaluative tool for mechanism designers and regulators to measure and compare models' context‑sensitivity to guide incentives, penalties, or certification regimes.
Methodological claim about the benchmark's applicability: BenchPreS produces MR and AAR metrics that can be used for comparisons; paper suggests use in policy/design contexts.
high positive BenchPreS: A Benchmark for Context-Aware Personalized Prefer... Usability of BenchPreS metrics (MR, AAR) for model comparison and regulatory eva...
BenchPreS provides a benchmark and evaluation protocol that systematically varies stored user preference, interaction partner (self vs third party), and normative requirement to assess appropriate suppression or application of preferences.
Dataset construction and evaluation procedure described: scenario generation varying preference, partner, and normative appropriateness; MR and AAR computed across the scenario set.
high positive BenchPreS: A Benchmark for Context-Aware Personalized Prefer... Benchmark coverage and experimental protocol (design dimensions: preference, par...
The paper advances a replicable interdisciplinary synthesis method and provides a simulated dataset and transparent protocols enabling other researchers to adapt the approach.
Methods section detailing systematic literature search protocols (ACM/IEEE/Springer, 2020–2024), inclusion criteria, simulation parameterization for the cross-sectoral dataset (seven industries, 2020–2024), and stated reproducibility materials.
high positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Availability and description of reproducible methods and a simulated dataset (re...
AI adoption is strongly associated with workforce skill transformation (reported correlation r = 0.71).
Correlational analysis reported in the paper using the simulated cross-sectoral dataset that mirrors employment trends across seven industries (Manufacturing, Healthcare, Finance, Education, Transportation, Retail, IT Services) over 2020–2024. This corresponds to sector-year observations (7 sectors × 5 years = 35 observations) and is triangulated with findings from a systematic literature synthesis (ACM, IEEE, Springer publications 2020–2024).
high positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Skill shift index (measure of changes in required skills and task composition)
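The reported r = 0.71 is a plain Pearson correlation over the 35 sector-year observations. A self-contained sketch of that computation on illustrative (made-up) data, not the paper's dataset:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sector-year pairs: AI adoption index vs. skill-shift index.
adoption = [0.10, 0.22, 0.35, 0.41, 0.58]
skill_shift = [0.15, 0.20, 0.33, 0.45, 0.52]
r = pearson_r(adoption, skill_shift)
```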
The evaluation compared models on multiple metrics (accuracy, precision, recall, F1, AUC) across repeated trials and cross-company tests, and reported gains for AI methods across these metrics.
Evaluation protocol described: repeated trials, cross-validation, holdout sets, cross-company tests; reported performance improvements for AI models on the listed metrics.
high positive Adoption of AI-Based HR Analytics and Its Impact on Firm Pro... Classification evaluation metrics (accuracy, precision, recall, F1, AUC)
Ensemble methods and deep learning models show the largest and most consistent improvements in predictive performance relative to classic statistical models.
Aggregate results across repeated trials and evaluation metrics indicate Random Forests and Gradient Boosting (ensembles) and deep neural networks outperform linear/logistic regression and other baselines on the publicly available datasets used.
high positive Adoption of AI-Based HR Analytics and Its Impact on Firm Pro... Predictive performance (accuracy, F1, AUC, etc.)
Modern AI-driven prediction methods (especially ensemble models and deep neural networks) systematically outperform traditional statistical approaches at predicting job performance in publicly available workforce datasets.
Direct model comparison reported in the paper: baseline statistical models (linear/logistic regression) versus machine learning models (Random Forest, Gradient Boosting, SVM, deep neural networks) evaluated on multiple publicly available workforce datasets using cross-validation and holdout sets; performance reported on accuracy, precision, recall, F1, and AUC across repeated trials.
high positive Adoption of AI-Based HR Analytics and Its Impact on Firm Pro... Job performance prediction (classification performance metrics: accuracy, precis...
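Apart from AUC (which requires ranked scores), the listed metrics are simple functions of confusion-matrix counts. A sketch with hypothetical holdout-set counts for one model and fold:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts, e.g. predicting high vs. low job performance:
m = classification_metrics(tp=80, fp=20, fn=10, tn=90)
```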
Research priorities include rigorous real-world trials assessing patient outcomes, cost-effectiveness, and labor impacts; comparative studies of integration strategies; measurement of long-run workforce effects; and development of standard metrics and monitoring frameworks.
Explicit recommendations from the narrative review based on identified gaps: scarcity of RCTs, economic analyses, and long-term workforce studies.
high positive Human-AI interaction and collaboration in radiology: from co... number and quality of real-world trials, existence of standardized monitoring fr...
Economists and researchers should measure organizational mediators (governance, mentoring practices, learning processes) alongside AI adoption and use empirical designs such as difference-in-differences with phased rollouts, randomized mentoring/training interventions, matched employer–employee panels, and IV exploiting exogenous shocks to innovation backing to identify causal effects.
Methodological recommendations and proposed empirical designs contained in the paper; no implementation or empirical results reported.
high positive Revolutionizing Human Resource Development: A Theoretical Fr... feasibility and validity of empirical identification strategies for causal effec...
The integrated framework links multi-level outcomes: micro (individual skills, task performance), meso (team coordination, workflows), and macro (organizational strategy, innovation, productivity) effects to adaptive structuration processes and affordance actualization.
Framework specification and theoretical mapping across levels in the conceptual paper; no empirical validation or sample.
high positive Revolutionizing Human Resource Development: A Theoretical Fr... individual skills and performance; team coordination and workflow quality; organ...
The paper develops a conceptual framework that integrates Adaptive Structuration Theory (AST) and Affordance Actualization Theory (AAT) to explain how effective human–AI collaboration can be structured within organizations.
Conceptual/theoretical synthesis and literature integration combining AST and AAT streams; no original empirical data or sample reported (theoretical development).
high positive Revolutionizing Human Resource Development: A Theoretical Fr... explanatory power / conceptual framework for human–AI collaboration
As the competition progressed, teams relied more on the AI for larger subtasks (increasing delegation and reliance).
Time-series instrumentation of AI interactions and participant behavior during the live CTF with 41 participants showing increased frequency and scope of delegated tasks later in the event.
high positive Understanding Human-AI Collaboration in Cybersecurity Compet... frequency of delegation and average scope/complexity of delegated tasks over com...
One autonomous agent finished second overall on the fresh challenge set.
Final ranking/scoreboard from benchmarking the four autonomous agents against the live CTF challenge set and human teams; agent achieved overall 2nd place.
high positive Understanding Human-AI Collaboration in Cybersecurity Compet... overall ranking (2nd place) on the challenge set
In a live onsite Capture-the-Flag (CTF) study (41 participants), human teams increasingly delegated larger subtasks to an instrumented AI as the competition progressed.
Empirical observation and instrumentation of AI interactions during a live, onsite CTF with 41 human participants/teams; delegation and task-size metrics tracked over time during the event.
high positive Understanding Human-AI Collaboration in Cybersecurity Compet... degree/size of subtasks delegated to the AI over time (delegation rate and subta...
Reward shaping at the assignment layer enables an explicit trade-off between diagnostic accuracy and human labor by incorporating penalties for human involvement.
Methodology section describing reward shaping and experimental comparisons showing different accuracy/human-effort trade-offs (results reported in paper; exact experimental details not provided in the summary).
high positive Hierarchical Reinforcement Learning Based Human-AI Online Di... diagnostic accuracy vs human effort (as controlled by reward shaping)
Masked reinforcement learning constrains the set of admissible actions at each step, reducing exploration over the huge symptom/action space.
Paper describes use of masked RL to limit action options during training and execution; used in both assignment and execution layers (methodological claim supported by algorithmic description and experiments).
high positive Hierarchical Reinforcement Learning Based Human-AI Online Di... action-space reduction / sample efficiency / learning stability (as applied to s...
The upper layer ('master') learns turn-by-turn human–machine assignment using masked reinforcement learning with reward shaping to balance accuracy and human cost.
Methodological description in the paper and empirical results from experiments using masked RL and reward-shaped objectives at the assignment layer (implementation and experimental setup reported; dataset/sample size not specified in summary).
high positive Hierarchical Reinforcement Learning Based Human-AI Online Di... assignment policy performance; human effort allocation; diagnostic accuracy unde...
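The entries above describe two mechanics: masked action selection and a reward-shaped trade-off between diagnostic accuracy and human effort. A minimal sketch of both (function names, Q-values, and the penalty value are illustrative, not taken from the paper):

```python
def masked_choice(q_values, mask):
    """Pick the highest-valued action among those the mask allows,
    rather than exploring the full symptom/action space."""
    allowed = [i for i, ok in enumerate(mask) if ok]
    return max(allowed, key=lambda i: q_values[i])

def shaped_reward(correct_diagnosis: bool, human_turns: int,
                  human_cost: float = 0.1) -> float:
    """Assignment-layer reward shaping: +1 for a correct diagnosis minus a
    per-turn penalty for each turn routed to the human. Raising human_cost
    pushes the policy toward machine-handled turns."""
    return (1.0 if correct_diagnosis else 0.0) - human_cost * human_turns

q = [0.2, 0.9, 0.4, 0.7]
mask = [True, False, True, True]  # action 1 masked out
a = masked_choice(q, mask)        # -> 3 (best allowed action)
r = shaped_reward(correct_diagnosis=True, human_turns=2)  # -> 0.8
```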
Service empathy mediates the relationship between employee emotion and collaboration proficiency.
Mediation analysis conducted on the experimental sample (n = 861) showing that measured 'service empathy' accounts for (part of) the effect of employee emotion on collaboration proficiency.
high positive Adoption of AI partners in temporary tasks: exploring the ef... collaboration proficiency
The paper advances augmentation debates by articulating the leader’s practical role when decision lead‑agency shifts between humans and AI and by detailing systemic HR changes needed to sustain performance, legitimacy and well‑being.
Stated contribution of the conceptual synthesis comparing existing augmentation and leadership literatures and providing an HR‑focused framework; descriptive of the paper's intellectual contribution.
high positive Symbiarchic leadership: leading integrated human and AI cybe... clarity of leader role; specification of HR system changes
Core practice 4 — Embed governance: make accountability, bias testing, privacy safeguards, audit trails, escalation thresholds and human oversight explicit and routine.
Prescriptive governance practice grounded in literature on algorithmic accountability and risk management and in practitioner examples; presented without original empirical validation.
high positive Symbiarchic leadership: leading integrated human and AI cybe... bias incidence; privacy breaches; auditability and compliance metrics
Core practice 3 — Manage the human–AI relationship: build adoption, psychological safety and calibrated trust; address automation anxiety and misuse.
Framework recommendation synthesizing organizational‑psychology and technology adoption literature plus practitioner observations; not tested empirically in the paper.
high positive Symbiarchic leadership: leading integrated human and AI cybe... adoption rates; psychological safety; calibrated trust; misuse incidents
Core practice 2 — Treat AI outputs as hypotheses: require human sensemaking and validation rather than blind adoption of model outputs.
Prescriptive practice derived from reviewed research and practitioner cases emphasizing human oversight; presented as framework guidance rather than empirically validated intervention.
high positive Symbiarchic leadership: leading integrated human and AI cybe... decision quality; error rates; incidence of blind automation
Core practice 1 — Allocate work by comparative advantage: assign tasks to humans or AI based on relative strengths (e.g., speed, pattern detection, contextual judgement).
Conceptual component of the framework drawn from synthesis of empirical findings in prior human–AI and task allocation literature and practitioner examples; no new empirical testing in the paper.
high positive Symbiarchic leadership: leading integrated human and AI cybe... task assignment efficiency; productivity from task allocation
Research agenda priorities include: empirically quantifying the value of digital twins on R&D productivity; studying complementarities between AI tools and tacit sensory knowledge; measuring cultural translation costs; and analyzing market concentration risks from proprietary sensory models.
List of recommended empirical research directions derived from conceptual analysis and gap identification; no primary empirical work conducted within the paper itself.
high positive At the table with Wittgenstein: How language shapes taste an... future empirical metrics: R&D productivity changes, complementarity estimates, m...
The collection emphasizes resolving methodological challenges such as ecological validity, generalization across environments, and integration of domain knowledge, rather than purely optimizing benchmarks.
Methodological-focus summary from the collection indicating emphasis on ecological validity, generalization, and domain-knowledge integration across multiple papers.
high positive Towards ‘digital ecology’: Advances in integrating artificia... methodological robustness (ecological validity, cross-site generalization, domai...
Early applications focused on automating straightforward, repetitive tasks (e.g., filtering blank camera‑trap images); current work aims for deeper integration with ecological questions.
Historical-arc observation drawn from the collection's examples and classifications of papers (descriptive review of prior vs. current papers in the collection).
high positive Towards ‘digital ecology’: Advances in integrating artificia... complexity and integration depth of AI applications in ecology (task automation ...
The AI–ecology interface is maturing from simple, task‑automation proofs of concept into genuinely interdisciplinary work that advances both AI methods and ecological science.
Synthesis of the paper collection (mix of methodological, empirical, and translational papers) and the paper's summary of trends across those contributions (no single-sample experiment; claim based on cross-paper review).
high positive Towards ‘digital ecology’: Advances in integrating artificia... advancement of AI methods and ecological science (depth of interdisciplinary int...
Seed 2.0 Lite achieved a 75.7% with-skill success rate, up 18.9 percentage points over baseline.
Model-specific reported result in the paper: Seed 2.0 Lite with-skill success rate (75.7%) and reported improvement (+18.9pp); reported from the benchmark runs.
high positive SKILLS: Structured Knowledge Injection for LLM-Driven Teleco... task success rate (percentage) and absolute percent-point lift
GLM-5 Turbo achieved a 78.4% with-skill success rate, up 5.4 percentage points over baseline.
Model-specific reported result in the paper: GLM-5 Turbo with-skill success rate (78.4%) and reported improvement (+5.4pp); based on the benchmark evaluation.
high positive SKILLS: Structured Knowledge Injection for LLM-Driven Teleco... task success rate (percentage) and absolute percent-point lift
Nemotron 120B achieved a 78.4% with-skill success rate, up 18.9 percentage points over baseline.
Model-specific reported result in the paper: Nemotron 120B with-skill success rate (78.4%) and reported improvement (+18.9pp); results drawn from the benchmark runs.
high positive SKILLS: Structured Knowledge Injection for LLM-Driven Teleco... task success rate (percentage) and absolute percent-point lift
MiniMax M2.5 achieved an 81.1% with-skill success rate, up 13.5 percentage points over baseline.
Model-specific reported result in the paper: MiniMax M2.5 with-skill success rate (81.1%) and reported improvement (+13.5pp); based on subset of the 185 scenario-runs across the evaluated models.
high positive SKILLS: Structured Knowledge Injection for LLM-Driven Teleco... task success rate (percentage) and absolute percent-point lift
Results across 5 open-weight model conditions and 185 scenario-runs show a consistent skill lift for every model.
Aggregate experimental results reported in the paper: evaluation over 5 model conditions and 185 scenario-runs, with cross-model improvement when SKILL is provided.
high positive SKILLS: Structured Knowledge Injection for LLM-Driven Teleco... skill lift measured as change in task success rate (percentage point improvement...
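The with-skill rates and lifts reported above imply baseline rates by simple subtraction; a small helper makes the percentage-point arithmetic explicit (the implied baselines are derived here, not quoted from the paper):

```python
# (model, with-skill success %, reported lift in percentage points)
results = [
    ("Seed 2.0 Lite", 75.7, 18.9),
    ("GLM-5 Turbo",   78.4,  5.4),
    ("Nemotron 120B", 78.4, 18.9),
    ("MiniMax M2.5",  81.1, 13.5),
]

def implied_baseline(with_skill: float, lift_pp: float) -> float:
    """Baseline success rate implied by a with-skill rate and its
    percentage-point lift (lift_pp = with_skill - baseline)."""
    return round(with_skill - lift_pp, 1)

baselines = {name: implied_baseline(ws, pp) for name, ws, pp in results}
```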
AI-adopting firms increase R&D expenditures following adoption.
Firm financial data showing higher R&D spending for adopters relative to nonadopters in post-adoption periods using the diff-in-diff framework.
high positive AI and Productivity: The Role of Innovation R&D expenditures (absolute or relative change)
Post-adoption patents by AI adopters receive more citations than those of nonadopters.
Difference-in-differences estimates comparing citation counts per patent before and after AI installation versus nonadopters; patent citation data used as the dependent variable.
high positive AI and Productivity: The Role of Innovation citations per patent (average citation count)
Firms that adopt AI subsequently increase patenting relative to nonadopters.
Firm-level analysis using a novel AI adoption measure based on timing of AI product installations and a stacked difference-in-differences design exploiting staggered adoption; dependent variable = firm patent counts (patenting rate). (Sample size and exact time period not specified in the provided text.)
high positive AI and Productivity: The Role of Innovation firm patent counts / patenting rate
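In its simplest two-group, two-period form, the difference-in-differences logic behind these estimates reduces to a double difference of group means. A toy sketch with hypothetical mean patent counts per firm-year (not the paper's data or its stacked estimator):

```python
def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Two-group, two-period difference-in-differences estimate:
    (treated post - treated pre) - (control post - control pre)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical means: adopters rise from 4.0 to 7.5 patents/firm-year,
# nonadopters from 4.0 to 4.5 over the same window.
effect = did_estimate(treat_pre=4.0, treat_post=7.5,
                      control_pre=4.0, control_post=4.5)  # -> 3.0
```

The subtraction of the control group's trend is what distinguishes this from a simple before/after comparison; the stacked design in the paper extends the same logic to staggered adoption timing.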
Programming experience significantly improved code security.
Association found in the study between participants' programming experience (general programming experience measured for each participant) and the security of their submitted code; statistical analysis in the sample (n = 159) showed a significant positive effect of experience on code security.
high positive The Impact of AI-Assisted Development on Software Security: ... code security (security quality of participants' solutions) as a function of pro...
Using distributed systems as a principled foundation is a useful approach for creating and evaluating LLM teams.
Primary methodological proposal of the paper; supported by conceptual argument and (per the paper) mappings between distributed-systems concepts and LLM team design (specific experimental validation not detailed in the excerpt).
high positive Language Model Teams as Distributed Systems suitability of distributed-systems framework for designing/evaluating LLM teams
Large language models (LLMs) are growing increasingly capable.
Statement in the paper's introduction/abstract summarizing the field; based on observed progress in LLM development cited by the authors (no experimental sample size provided in the excerpt).
high positive Language Model Teams as Distributed Systems capability of LLMs (general competence/capacity)
Only seven of the 49 specialized skills produce meaningful gains (up to +30%).
Empirical results showing that 7 out of 49 skills yielded meaningful positive improvements in acceptance-test pass rates, with gains up to 30%.
high positive SWE-Skills-Bench: Do Agent Skills Actually Help in Real-Worl... number of skills with meaningful positive pass-rate gains and magnitude (up to +...
The average gain from injecting skills is only +1.2% in pass rate.
Aggregated pass-rate differences computed across the benchmark tasks comparing with-skill vs without-skill conditions, reported as an average +1.2% gain.
high positive SWE-Skills-Bench: Do Agent Skills Actually Help in Real-Worl... average change in acceptance-test pass rate (+1.2%)
Analysis of benchmark data (n = 667) reveals substantial synergy effects: Llama-3.1-8B improves human performance by 23 percentage points.
Empirical analysis of the same benchmark dataset (n = 667) using the Bayesian IRT model; reported improvement in human performance with Llama-3.1-8B assistance of +23 percentage points.
high positive Quantifying and Optimizing Human-AI Synergy: Evidence-Based ... human task performance (accuracy, measured in percentage points) when assisted b...
Analysis of benchmark data (n = 667) reveals substantial synergy effects: GPT-4o improves human performance by 29 percentage points.
Empirical analysis of a benchmark dataset of n = 667 using the paper's Bayesian IRT framework; reported improvement in human performance with GPT-4o assistance of +29 percentage points.
high positive Quantifying and Optimizing Human-AI Synergy: Evidence-Based ... human task performance (accuracy, measured in percentage points) when assisted b...
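The paper's exact Bayesian IRT specification is not reproduced here; a standard two-parameter logistic (2PL) item-response model illustrates how ability and item difficulty map to success probability, and how a percentage-point lift can be read as a shift in effective ability (all numbers illustrative):

```python
import math

def irt_2pl(theta: float, difficulty: float,
            discrimination: float = 1.0) -> float:
    """Two-parameter logistic IRT: probability that a person with ability
    theta answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A percentage-point lift is the gap between two such probabilities,
# e.g. unassisted vs. AI-assisted effective ability on the same item:
p_solo = irt_2pl(theta=0.0, difficulty=0.5)
p_assisted = irt_2pl(theta=1.3, difficulty=0.5)
lift_pp = 100 * (p_assisted - p_solo)
```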
The article discusses managerial and public-policy implications for reducing friction, accelerating responsible adoption, and guiding investment in productivity and inclusion.
Discussion section mentioned in the abstract, addressing managerial responsibilities and public policy; the abstract reports no empirical evaluation of policies.
high positive A FRICÇÃO PSICOANTROPOLÓGICA (SCF - Symbolic-Cognitive Frict... recommendations and guidance for managerial action and public policy aimed at reducing...
The article delivers replicable instruments for practical use: the SCF-30 scale, a minimal AI-governance checklist, and a 30-60-90-day matrix.
Explicit statement in the abstract that replicable instruments are provided; the instruments are presumed to be included in the body of the article.
high positive A FRICÇÃO PSICOANTROPOLÓGICA (SCF - Symbolic-Cognitive Frict... availability of operational instruments (scale, checklist, 30-60-90 matrix...
High-quality chatbots (96–100% accurate) improved caseworker accuracy by 27 percentage points.
Experimental result reported in paper: treatment with chatbots at 96–100% aggregate accuracy produced a 27 percentage-point increase in caseworker accuracy compared to control; based on the randomized experiment on the 770-question benchmark.
high positive LLMs in social services: How does chatbot accuracy affect hu... change in caseworker accuracy (percentage-point increase) when assisted by 96–10...