The Commonplace

Evidence (2954 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
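A matrix like the one above is a simple cross-tabulation of a flat claims table. A minimal pandas sketch, using illustrative column names (`outcome`, `direction`) rather than the dashboard's actual schema:

```python
import pandas as pd

# Hypothetical flat claims table; rows and column names are
# invented for illustration, not taken from the dashboard.
claims = pd.DataFrame({
    "outcome":   ["Firm Productivity", "Firm Productivity", "Error Rate", "Error Rate"],
    "direction": ["Positive", "Negative", "Positive", "Mixed"],
})

# Count claims per outcome x direction, with row/column totals.
matrix = pd.crosstab(claims["outcome"], claims["direction"],
                     margins=True, margins_name="Total")
print(matrix)
```

`margins=True` adds the "Total" row and column, matching the matrix's final column.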
Active filter: Human-AI Collaboration
The top four models are statistically indistinguishable (mean scores 0.147–0.153), while a clear tier gap separates them from the remaining four models (mean scores ≤ 0.113).
Reported mean performance scores across 8 models, with a statement of statistical indistinguishability for the top four versus the lower-tier four; numerical means provided.
high mixed SWE-PRBench: Benchmarking AI Code Review Quality Against Pul... mean model performance score
Behavioral factors — specifically trust calibration, cognitive load, and affective reactions — shape the transition of corporate AI initiatives from pilot deployments to scalable, sustained use.
Synthesis of human-AI interaction literature integrated with adoption frameworks (TAM and TOE); conceptual linkage rather than new empirical testing in this paper.
high mixed Behavioral Factors as Determinants of Successful Scaling of ... success of pilot-to-production transition (scalability and sustained use)
AI accelerates value-chain maturation while creating distinct risks — including professional responsibility tensions and potential system-level externalities.
Conceptual argument and risk analysis in the Article (theoretical reasoning and synthesis of management/ethics literature). No empirical causal estimate reported in the excerpt.
high mixed Rewired: Reconceptualizing Legal Services for the AI Age acceleration of value-chain maturation and emergence of professional responsibil...
The legal profession is at a crossroads, caught between intensifying fears of AI-driven displacement and a generational opportunity for transformation.
Author's synthesis and framing in the Article (conceptual assessment; literature/contextual synthesis). No empirical sample or experiment reported in the excerpt.
high mixed Rewired: Reconceptualizing Legal Services for the AI Age risk of AI-driven displacement and opportunity for transformation in the legal p...
This advantage is contingent upon robust AI governance, ethical frameworks, and the transition from 'pilot-lite' projects to integrated, data-driven 'AI-first' business models.
Conditional claim in the paper linking success to governance, ethics, and organizational integration; appears to be normative/analytical rather than empirical in the abstract.
high mixed The AI Advantage: Strategic Innovation and Global Expansion ... dependency of AI-driven advantage on governance, ethics, and organizational inte...
Actual sharing often contradicted willingness to share (the privacy paradox), with consistently high sharing rates across all conditions.
Observed discrepancy reported in the experimental results (N=240): despite variation in willingness-to-share, behavioral sharing rates remained high and similar across human, white-box AI, and black-box AI conditions.
high mixed Understanding Data-Sharing with AI Systems: The Roles of Tra... discrepancy between stated willingness to share vs actual sharing behavior
Machine-readable metrics and open scholarly infrastructure are reshaping scholarly profiles and incentives.
Conceptual and historical discussion referring to platforms and metrics (e.g., arXiv, Google Scholar, ORCID) as mechanisms changing incentives; no new empirical estimates provided.
high mixed A Brief History of AI for Scientific Discovery: Open Researc... changes in scholarly incentives and profile construction due to machine-readable...
That interconnected ecosystem is fundamentally restructuring who can do science (access), how fast discoveries propagate, and what counts as a valid scientific contribution.
Argumentative claim linking infrastructural and tool changes to changes in access, dissemination speed, and norms of contribution. The paper presents examples and narrative but no systematic empirical evaluation or sample.
high mixed A Brief History of AI for Scientific Discovery: Open Researc... access to scientific practice, speed of discovery dissemination, and norms of sc...
The most consequential development is not any single tool but the emergence of an interconnected ecosystem—AI agents, preprint platforms, open source codebases, and citation infrastructure—that forms a feedback loop.
Synthesis/argument based on multiple examples (LLM agents, preprint servers like arXiv, open-source code repositories, citation indices). No quantitative measurement or causal identification reported.
high mixed A Brief History of AI for Scientific Discovery: Open Researc... emergence of an interconnected scientific infrastructure ecosystem
The central tension in AI for science is between automation (building systems that replace human researchers) and augmentation (tools that amplify human creativity and judgement).
Analytical claim based on the paper's review of historical examples and conceptual discussion; no primary data or experimental design reported.
high mixed A Brief History of AI for Scientific Discovery: Open Researc... relationship between automation and augmentation in research practice
Science has repeatedly delegated its bottlenecks to machines—first inference, then search, then measurement, then the full workflow—and each delegation solves one problem while exposing a harder one underneath.
Interpretive historical argument drawing on examples across AI-for-science milestones (e.g., DENDRAL, search and inference systems, measurement automation, and contemporary end-to-end workflows). No quantitative sample or experimental method reported.
high mixed A Brief History of AI for Scientific Discovery: Open Researc... pattern of delegation and emergent bottlenecks in research workflows
Testing revealed that AI excels at computational tasks but consistently misses nuanced factors such as new-construction rent premiums and infrastructure-proximity impacts, validating the framework's hybrid structure as essential for professional-grade underwriting.
Findings from the controlled ChatGPT-4 test on the single 150-unit scenario: qualitative and comparative observations showing AI handled computations well but failed to capture specific local-market nuances, leading authors to endorse a hybrid human-AI framework.
Phase Two requires human-led professional validation to correct AI limitations, apply local market knowledge, and integrate risk factors.
Framework description supported by observations from the controlled test where human review was used to correct AI outputs and apply local knowledge (e.g., adjusting for nuanced market factors).
AI assistance in safety engineering is fundamentally a collaboration design problem rather than merely a software procurement decision: the same tool can either degrade or improve analysis quality depending entirely on how it is used.
Synthesis of the formal framework and analytic results in the paper (theoretical argument; no empirical sample reported).
The paper concludes by discussing open challenges in evaluating harmful manipulation by AI models.
Paper includes a discussion/conclusion section enumerating open challenges; stated in abstract.
high mixed Evaluating Language Models for Harmful Manipulation identification of open research and evaluation challenges
We identify significant differences across our tested geographies, suggesting that AI manipulation results from one geographic region may not generalise to others.
Empirical comparison across three locales (US, UK, India) showing statistically significant differences in manipulation outcomes by geography.
high mixed Evaluating Language Models for Harmful Manipulation geographic variation in manipulative behaviour/effects
Context matters: AI manipulation differs between domains, suggesting that it needs to be evaluated in the high-stakes context(s) in which an AI system is likely to be used.
Comparative analysis across three domains (public policy, finance, health) showing differences in manipulative behaviour and/or impact by domain in the empirical study.
high mixed Evaluating Language Models for Harmful Manipulation variation in manipulative behaviour/effects across use domains
AUROC_2 and M-ratio produce fully inverted model rankings, demonstrating these metrics answer fundamentally different evaluation questions.
Metric comparison across models showing that AUROC_2-based ranking and M-ratio-based ranking are fully inverted in the reported results on the evaluated dataset.
high mixed Do LLMs Know What They Know? Measuring Metacognitive Efficie... model ranking by AUROC_2 versus model ranking by M-ratio
Temperature manipulation shifts Type-2 criterion while meta-d' remains stable for two of four models, dissociating confidence policy from metacognitive capacity.
Experimental manipulation (temperature changes) applied to models; reported result that Type-2 criterion shifted with temperature while meta-d' was stable for two models (out of four) in the 224,000-trial dataset.
high mixed Do LLMs Know What They Know? Measuring Metacognitive Efficie... Type-2 criterion (confidence policy) and meta-d' (metacognitive capacity)
Metacognitive efficiency is domain-specific, with different models showing different weakest domains, invisible to aggregate metrics.
Domain-level analyses reported in the paper showing per-domain M-ratio results and identification of different weakest domains per model, contrasted with aggregate metric behavior.
high mixed Do LLMs Know What They Know? Measuring Metacognitive Efficie... domain-specific metacognitive efficiency (M-ratio) across task domains
Metacognitive efficiency varies substantially across models even when Type-1 sensitivity is similar — Mistral achieves the highest d' but the lowest M-ratio.
Empirical comparison of Type-1 sensitivity (d') and metacognitive efficiency (M-ratio) across the four evaluated LLMs on the 224,000 QA trials; explicit statement that Mistral had highest d' but lowest M-ratio.
high mixed Do LLMs Know What They Know? Measuring Metacognitive Efficie... Type-1 sensitivity (d') and metacognitive efficiency (M-ratio)
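The metrics behind these claims come from signal detection theory: d′ measures Type-1 sensitivity, and the M-ratio (meta-d′ / d′) measures metacognitive efficiency. A toy sketch with made-up numbers; meta-d′ appears as a placeholder here, since in practice it is fitted to confidence-rating data rather than computed directly:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Type-1 sensitivity: d' = z(H) - z(FA)."""
    return z(hit_rate) - z(fa_rate)

# Illustrative numbers only (not from the paper): a model with strong
# Type-1 accuracy whose confidence tracks correctness poorly.
d1 = d_prime(0.90, 0.20)   # Type-1 sensitivity
meta_d = 1.0               # placeholder; estimated from a meta-SDT fit in practice
m_ratio = meta_d / d1      # metacognitive efficiency < 1
print(round(d1, 3), round(m_ratio, 3))
```

This is how a model can pair the highest d′ with the lowest M-ratio: the denominator grows while fitted meta-d′ does not.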
Organizational culture and technological readiness moderate the effectiveness of generative AI integration in decision-making processes.
The paper reports moderation effects tested in the SEM framework using survey data from senior managers, decision-makers, and AI adoption specialists (SmartPLS). No numeric moderator effect sizes or sample size provided in the excerpt.
high mixed The Strategic Impact of Generative Artificial Intelligence o... effectiveness of generative AI integration in decision-making (moderation effect...
Implementation of human-replacing technologies leads to significant transformations in skill demand: it reduces reliance on low-skilled labour while increasing demand for qualified engineers, system operators and specialists in digital technologies.
Sector-specific analysis and review of international labour-market studies cited in the article documenting skill-biased effects of automation and digitalization; qualitative assessment for Ukraine's mining and metallurgical sector under workforce shortage conditions.
high mixed Human-replacing technologies as a driver of labour productiv... skill demand composition (shift from low-skilled to high-skilled roles)
The framework implies threshold effects in training and capability acquisition: when the teaching horizon lies below the prerequisite depth of the target, additional instruction cannot produce successful completion of teaching; once that depth is reached, completion becomes feasible.
Model-derived threshold result described in the abstract (mathematical analysis of prerequisite depth vs. teaching horizon).
high mixed A Mathematical Theory of Understanding feasibility of successful teaching / completion of instruction
The value of information depends on whether downstream users can absorb and act on it: a signal conveys meaning only to a learner with the structural capacity to decode it (an explanation that clarifies a concept for one user may be indistinguishable from noise to another who lacks the relevant prerequisites).
Conceptual argument motivating the model; theoretical reasoning described in the paper's intro/abstract.
high mixed A Mathematical Theory of Understanding ability to interpret instructional signals / effective information transfer
Generative AI serves as an effective 'wingman' for employment lawyers, capable of replacing substantial junior associate work while requiring continued human expertise for client counseling, supervision, and final legal advice preparation.
Authors' synthesis of experimental results showing AI-produced substantive analysis plus discussion about remaining limitations (e.g., citation errors) and required human oversight; qualitative assertion about substitutability for junior associate tasks.
high mixed Robot Wingman: Using AI to Assess an Employment Termination potential replacement of junior associate tasks and required human oversight
PPS gains are task-dependent: gains are large in high-ambiguity business analysis tasks but reverse in low-ambiguity travel planning tasks.
Task-level analysis across the three domains (business, technical, travel) within the controlled study (60 tasks total); authors report differential performance patterns by domain/ambiguity.
high mixed Evaluating 5W3H Structured Prompting for Intent Alignment in... relative_performance_by_task_domain (PPS vs baselines)
AI usage has dual effects on employees: it can both enhance innovative behavior and predict disengagement, as revealed by a dual-path (SOR-based) model.
Interpretation/synthesis from the four-stage longitudinal study of 285 finance professionals using a dual-path model based on SOR theory (combining the mediation and moderation results).
high mixed Autonomous enhancement or emotional depletion? The dual-path... innovative work behavior and work disengagement behavior (dual outcomes)
We evaluate 14 LLMs under zero-shot prompting and retrieval-augmented settings and witness a clear performance gap.
Experimental evaluation reported in the paper: authors state they ran experiments on 14 different large language models, under zero-shot and retrieval-augmented configurations, and observed differing performance across models.
high mixed FinTradeBench: A Financial Reasoning Benchmark for LLMs model performance on financial reasoning benchmark (accuracy/score across models...
Artificial intelligence embedded in human decision-making can either enhance human reasoning or induce excessive cognitive dependence.
Stated as a conceptual claim in the paper's introduction/abstract; supported by the paper's conceptual framing (theoretical argument), no empirical sample or experimental data reported here.
high mixed Cognitive Amplification vs Cognitive Delegation in Human-AI ... human reasoning quality / cognitive dependence
These productivity gains are most pronounced for lower-skilled workers, producing a pattern the authors call “skill compression.”
Cross-study pattern reported in the literature review: comparative evidence across worker-skill strata in multiple empirical papers showing larger relative gains for lower-skilled/junior workers; specific underlying studies and sample sizes are not enumerated in the brief.
high mixed AI, Productivity, and Labor Markets: A Review of the Empiric... relative productivity/gains by worker skill level (leading to 'skill compression...
Study 1 quantifies confirmation bias through controlled experiments on 250 CVE vulnerability/patch pairs evaluated across four state-of-the-art models under five framing conditions for the review prompt.
Controlled experiment described in the paper: 250 CVE vulnerability/patch pairs evaluated across four state-of-the-art LLMs under five prompt framing conditions.
high mixed Measuring and Exploiting Confirmation Bias in LLM-Assisted S... confirmation bias as measured by vulnerability detection performance
These findings challenge the narrative of complete automation by AI and underscore the enduring importance of human expertise in data science.
Interpretation based on competition results where AI-only baselines underperformed relative to many participant teams and top solutions used human-AI collaboration.
high mixed AgentDS Technical Report: Benchmarking the Future of Human-A... implications for automation vs. human expertise
These findings indicate a misalignment between the perceived benefit of AI writing and an implicit, consistent effect on the semantics of human writing, with potential implications for cultural and scientific institutions.
Synthesis and interpretation of the paper's empirical results (user study, essay revision experiments, and peer-review analysis); presented as the paper's broader conclusion.
high mixed How LLMs Distort Our Written Language alignment between perceived benefits and actual semantic effects of AI writing; ...
The paper formalizes the distinction using a signal-aggregation model in which an organization maintains an anchor belief and achieves agreement through two exclusion channels: (1) report shrinkage toward the anchor and (2) a tolerance rule that discards reports deviating beyond a threshold.
Analytical formal model presented in the paper specifying an anchor belief and two exclusion mechanisms; model assumptions and mechanisms are explicit in the theoretical development. No empirical sample.
high mixed Cohesion as Concentration: Exclusion-Driven Fragility in Fin... mechanisms producing agreement (report shrinkage, tolerance-based discarding)
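The two exclusion channels described above can be sketched in a few lines; the parameter names and numbers below are illustrative, not the paper's notation:

```python
import numpy as np

def aggregate(reports, anchor, shrink=0.5, tol=1.0):
    """Toy version of the two exclusion channels:
    1) shrink each report toward the anchor belief;
    2) discard reports still deviating from the anchor beyond a tolerance."""
    reports = np.asarray(reports, dtype=float)
    shrunk = anchor + (1 - shrink) * (reports - anchor)   # channel 1: shrinkage
    kept = shrunk[np.abs(shrunk - anchor) <= tol]         # channel 2: tolerance rule
    return kept.mean(), len(kept)

# The dissenting report (4.0) shrinks to 2.0, then is discarded by the tolerance rule.
belief, n_kept = aggregate([0.0, 0.5, 1.0, 4.0], anchor=0.0)
print(belief, n_kept)
```

The resulting agreement reflects exclusion, not integration: the aggregate looks cohesive precisely because the outlying signal never enters it.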
Organizational cohesion is observationally ambiguous: it can arise either from genuine information integration (debate and synthesis of heterogeneous inputs) or from exclusionary processes (conformity pressure, gatekeeping, intolerance of dissent).
Conceptual argument and formal definition in the paper framing; supported by the analytic distinction introduced in the paper between integration and exclusion as alternative generative mechanisms for observed agreement. No empirical sample—argument is theoretical and illustrated by model construction.
high mixed Cohesion as Concentration: Exclusion-Driven Fragility in Fin... source of observed cohesion (integration versus exclusion)
The authors identify ten evaluation practices that teams use, ranging from lightweight interpretive checks to formal organizational processes (examples: qualitative user reviews, red-team testing, A/B experiments, telemetry/log analysis, structured annotation, governance/meta-evaluation).
Thematic coding of 19 interview transcripts produced a taxonomy enumerating ten practices (paper reports the taxonomy as an outcome).
high mixed Results-Actionability Gap: Understanding How Practitioners E... taxonomy/count and description of evaluation practices
The net educational value of AI-generated feedback depends on alignment with pedagogical goals, quality evaluation, integration with human teaching, and governance to manage equity, privacy, and incentives.
Synthesis statement from the meeting report produced by 50 interdisciplinary scholars; conceptual judgment rather than empirical proof.
high mixed The Future of Feedback: How Can AI Help Transform Feedback t... net educational value (composite of learning outcomes, equity metrics, privacy c...
Convergence after exemplar exposure occurred both through tightening of estimates within a measure family and through agents switching measure families.
Agent-level tracking across stages showed two patterns following exemplar exposure: (1) reduced within-family dispersion (tighter estimates) and (2) categorical switches in measure selection by some agents, as recorded across the 150-agent sample.
high mixed Nonstandard Errors in AI Agents within-family dispersion (IQR) and measure-family switching frequency (binary/ca...
LLMs excel at extracting and generating arguments from unstructured text but are opaque and hard to evaluate or trust.
Synthesis of recent LLM literature and observed properties (generation capability vs. opacity); no empirical evaluation within this paper.
high mixed Argumentative Human-AI Decision-Making: Toward AI Agents Tha... argument extraction/generation performance and model interpretability/trustworth...
The paper is primarily theoretical and historical; empirical validation is needed to quantify the irreducible component of LLM value, and practical degrees of rule‑extractability may exist even if some capabilities remain tacit.
Stated limitations section acknowledging the theoretical nature of the work and the need for empirical follow‑up.
high mixed Why the Valuable Capabilities of LLMs Are Precisely the Unex... need for empirical validation and degree of rule‑extractability of LLM capabilit...
If an LLM's full capability were reducible to an explicit rule set, that rule set would be an expert system; because expert systems are empirically and historically weaker than LLMs, this leads to a contradiction (supporting non‑rule‑encodability).
Logical proof‑by‑contradiction presented in the paper, supported by conceptual mapping between rule sets and expert systems and qualitative historical comparisons.
high mixed Why the Valuable Capabilities of LLMs Are Precisely the Unex... logical consistency of the reducibility-to-rules claim (validity of the contradi...
Teamwork partner type moderates the effect of service empathy on collaboration proficiency (i.e., the impact of service empathy on proficiency differs by human vs AI partner).
Reported interaction/moderated-mediation analyses from the online experiment (n = 861) indicating a significant partner-type × service-empathy interaction predicting collaboration proficiency.
Employees' emotional state significantly moderates the relationship between partner type (human vs AI) and collaboration proficiency.
Moderation analyses reported from the same online experimental dataset (n = 861), testing interaction terms between partner type and measured employee emotion on collaboration proficiency; authors report a significant moderating effect.
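A moderation effect of this kind is typically tested with an interaction term in a regression. A simulated sketch; the data and coefficients are invented, and only the sample size matches the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 861  # matches the reported sample size; the data are simulated

partner_ai = rng.integers(0, 2, n)   # 0 = human partner, 1 = AI partner
emotion = rng.normal(size=n)         # measured employee emotion (standardized)
# Simulated truth: the partner effect depends on emotion (interaction = 0.5)
proficiency = (1.0 + 0.3 * partner_ai + 0.2 * emotion
               + 0.5 * partner_ai * emotion
               + rng.normal(scale=0.5, size=n))

# OLS: y ~ 1 + partner + emotion + partner*emotion
X = np.column_stack([np.ones(n), partner_ai, emotion, partner_ai * emotion])
beta, *_ = np.linalg.lstsq(X, proficiency, rcond=None)
print(beta.round(2))  # a nonzero last coefficient indicates moderation
```

A significant interaction coefficient is what "significantly moderates" means operationally here.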
AI adoption has an inverted U-shaped effect on employee-related corporate social responsibility (ECSR).
Panel regression with quadratic specification (AI and AI^2) showing statistically significant positive coefficient on AI and statistically significant negative coefficient on AI^2; sample of 2,575 Chinese listed firms observed 2013–2023; controls, firm and/or year fixed effects and robustness checks reported.
high mixed Attention to Whom? AI Adoption and Corporate Social Responsi... Employee-related corporate social responsibility (ECSR)
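An inverted U-shape is detected by the quadratic specification the evidence line describes: a positive coefficient on AI and a negative coefficient on AI². A simulated cross-sectional sketch with invented numbers and no fixed effects:

```python
import numpy as np

rng = np.random.default_rng(1)
ai = rng.uniform(0, 10, 2575)  # simulated AI-adoption intensity (illustrative)
# Simulated inverted U: positive linear term, negative squared term
ecsr = 2.0 + 1.0 * ai - 0.08 * ai**2 + rng.normal(scale=0.5, size=ai.size)

# Quadratic specification: ECSR ~ 1 + AI + AI^2
X = np.column_stack([np.ones_like(ai), ai, ai**2])
b0, b1, b2 = np.linalg.lstsq(X, ecsr, rcond=None)[0]
turning_point = -b1 / (2 * b2)   # peak of the inverted U
print(round(b1, 2), round(b2, 3), round(turning_point, 2))
```

The sign pattern (b1 > 0, b2 < 0) is the inverted U; the turning point marks where further adoption begins to reduce the outcome.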
Token overhead varies from modest savings to a 451% increase while pass rates remain unchanged.
Measured token usage for agent runs with and without skills, reporting a range from modest token savings up to a 451% token increase with no corresponding change in pass rates.
high mixed SWE-Skills-Bench: Do Agent Skills Actually Help in Real-Worl... token usage/overhead (percent change) and its relation to pass rates
The research methodology combines systemic analysis, comparative assessment of international practices, and analytical generalization of organizational learning models, enabling it to capture both structural trends and concrete institutional responses to technological change.
Methodological statement from the paper describing its approach; this is a factual claim about methods used rather than an empirical finding.
high mixed EDUCATIONAL AND PROFESSIONAL STRATEGIES FOR PREPARING HUMAN ... ability to capture structural trends and institutional responses (through the ch...
Model output can be treated as evidence for studying human behavior, but there are important epistemic limits to interpreting model-generated text as direct evidence of human beliefs or social facts.
Epistemic analysis and methodological critique in the paper (discussion of limits of treating model outputs as evidence); no single empirical test cited in the provided text.
high mixed The Third Ambition: Artificial Intelligence and the Science ... validity and limits of using LLM outputs as evidence about human behavior and so...
The validity of human–AI decision-making studies hinges on participants' behaviours; well-designed incentives can shape these behaviours.
Conclusion from the authors' thematic review and theoretical rationale linking incentive design to participant behaviour and study validity (no quantitative effect sizes provided in excerpt).
high mixed Incentive-Tuning: Understanding and Designing Incentives for... participant behaviour (engagement, effort, strategy) and resulting study validit...
The study's counterfactual analytical model links HR indicators (training intensity, absenteeism, labor productivity, turnover rates, workforce allocation) to organizational performance outcomes using regression-based simulations and predictive estimation.
Methodological claim explicitly stated: model construction from an industrial firm dataset using regression-based simulations and predictive techniques. (Specific sample size, variable operationalizations, and time frame not reported in the description.)
high mixed Artificial Intelligence and Human Resource Management: A Cou... methodological estimate of counterfactual organizational performance outcomes
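A regression-based counterfactual of the kind described can be sketched as: fit performance on HR indicators, then re-predict under altered inputs. All variable names and coefficients below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500  # simulated firm records; all numbers are invented

# Hypothetical HR indicators (names chosen for illustration):
training = rng.uniform(0, 40, n)       # training hours per employee
absenteeism = rng.uniform(0, 0.1, n)   # absence rate
performance = 50 + 0.4 * training - 120 * absenteeism + rng.normal(scale=2, size=n)

# Step 1: fit a regression of performance on the HR indicators.
X = np.column_stack([np.ones(n), training, absenteeism])
beta = np.linalg.lstsq(X, performance, rcond=None)[0]

# Step 2: counterfactual simulation — predict performance if training
# were raised by 10 hours with absenteeism held fixed.
X_cf = X.copy()
X_cf[:, 1] += 10
gain = (X_cf @ beta - X @ beta).mean()
print(round(gain, 2))
```

The counterfactual gain is just the fitted training coefficient scaled by the hypothetical change, which is why such models are only as credible as the regression behind them.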