The Commonplace

Evidence (2480 claims)

Adoption (5227 claims)
Productivity (4503 claims)
Governance (4100 claims)
Human-AI Collaboration (3062 claims)
Labor Markets (2480 claims)
Innovation (2320 claims)
Org Design (2305 claims)
Skills & Training (1920 claims)
Inequality (1311 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 373 105 59 439 984
Governance & Regulation 366 172 115 55 718
Research Productivity 237 95 34 294 664
Organizational Efficiency 364 82 62 34 545
Technology Adoption Rate 293 118 66 30 511
Firm Productivity 274 33 68 10 390
AI Safety & Ethics 117 178 44 24 365
Output Quality 231 61 23 25 340
Market Structure 107 123 85 14 334
Decision Quality 158 68 33 17 279
Fiscal & Macroeconomic 75 52 32 21 187
Employment Level 70 32 74 8 186
Skill Acquisition 88 31 38 9 166
Firm Revenue 96 34 22 152
Innovation Output 105 12 21 11 150
Consumer Welfare 68 29 35 7 139
Regulatory Compliance 52 61 13 3 129
Inequality Measures 24 68 31 4 127
Task Allocation 71 10 29 6 116
Worker Satisfaction 46 38 12 9 105
Error Rate 42 47 6 95
Training Effectiveness 55 12 11 16 94
Task Completion Time 76 5 4 2 87
Wages & Compensation 46 13 19 5 83
Team Performance 44 9 15 7 76
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 18 16 9 5 48
Job Displacement 5 29 12 46
Social Protection 19 8 6 1 34
Developer Productivity 27 2 3 1 33
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 8 4 9 21
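A matrix like this is just a two-way count of claims by outcome category and direction of finding. A minimal stdlib-only sketch over hypothetical claim records (the categories and counts below are illustrative, not the dashboard's data):

```python
from collections import Counter

# Hypothetical claim records: (outcome category, direction of finding).
claims = [
    ("Firm Productivity", "positive"),
    ("Firm Productivity", "negative"),
    ("Error Rate", "negative"),
    ("Firm Productivity", "positive"),
]

# Two-way counts (outcome x direction) plus per-outcome totals.
matrix = Counter(claims)
totals = Counter(outcome for outcome, _ in claims)

directions = ["positive", "negative", "mixed", "null"]
for outcome in sorted(totals, key=totals.get, reverse=True):
    row = [matrix[(outcome, d)] for d in directions]
    print(outcome, *row, totals[outcome])
```

Counter returns 0 for absent (outcome, direction) pairs, so empty cells fall out naturally; sorting by total reproduces the descending order used above.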
Filter: Labor Markets
Heterogeneity across universities implies that targeting high-performing institutions and diffusing their practices could be more effective than uniform expansion of AI training.
Observed variation in employment effectiveness, placement outcomes, and wages across the 191 universities; policy implication drawn from comparative performance patterns.
medium mixed Employment of Graduates of Educational Programs in the Field... Relative effectiveness of university programs (employment rates, wage outcomes) ...
Labor market institutions (unions, collective bargaining), education and training systems, social safety nets, and regulations substantially mediate distributional and aggregate outcomes of AI adoption.
Comparative institutional analysis and equilibrium models linking institutional settings to wage-setting and reallocation dynamics, supported by empirical cross-jurisdiction comparisons where available.
medium mixed Intelligence and Labor Market Transformation: A Critical Ana... distributional outcomes (inequality), unemployment, and wage-setting dynamics
Developing economies face different trade-offs from AI adoption than advanced economies, due to different occupational structures and complementarities.
Comparative analyses and sectoral studies drawing on cross-country microdata and institutional comparisons; theoretical models highlighting differences in task composition and absorptive capacity.
medium mixed Intelligence and Labor Market Transformation: A Critical Ana... country-level employment and wage impacts, particularly by sector and occupation...
Occupational reallocation occurs: declines in some routine occupations alongside growth in AI-complementary roles (e.g., AI maintenance, oversight, and creative tasks).
Administrative and household employment data analyzed with occupational breakdowns, supplemented by task-mapping methods and panel/event-study approaches documenting shifting occupational shares over time.
medium mixed Intelligence and Labor Market Transformation: A Critical Ana... occupational employment shares and job creation in AI-complementary roles
Lower-skill roles experience mixed outcomes: some see adverse effects from automation while others benefit where AI is complementary to their tasks.
Microdata analyses and case studies showing heterogeneous effects by task complementarity; task-based exposure measures that differentiate which low-skill tasks are automatable versus augmentable.
medium mixed Intelligence and Labor Market Transformation: A Critical Ana... employment and wages of lower-skill workers
AI contributes to wage polarization: earnings grow at the top of the distribution and stagnate or fall for middle occupations.
Wage distribution decompositions and panel regression studies that examine percentile-level wage changes, combined with task-based exposure measures linking AI adoption to differential impacts across the wage distribution.
medium mixed Intelligence and Labor Market Transformation: A Critical Ana... wage changes across distribution (top percentiles vs. middle percentiles)
The employment impact of automation depends crucially on labour-market structure (formal vs informal), availability of alternative employment, and social protections.
Theoretical framing supported by secondary literature comparing institutional contexts and their mediating effects on automation outcomes; no primary causal estimates in this paper.
medium mixed Who Loses to Automation? AI-Driven Labour Displacement and t... employment impact of automation (unemployment, underemployment, reallocation rat...
Standard policy responses focused on retraining and active labor-market programs are necessary but insufficient to fully offset structural job losses where K_T substitutes broadly for tasks.
Model simulations and policy experiments in the calibrated dynamic model comparing scenarios with aggressive retraining versus structural fiscal/interventionist reforms; discussion of empirical limits from case studies and historical reskilling outcomes.
medium mixed The Macroeconomic Transition of Technological Capital in the... employment recovery and distributional outcomes under alternative policy scenari...
Automation of routine drafting tasks by GLAI may reduce demand for junior drafting labor while increasing demand for skilled reviewers, auditors, and legal technologists.
Labor-market reasoning based on task automation literature and illustrative vignettes; no labor-force survey or longitudinal employment data provided.
medium mixed (negative for junior drafting roles, positive for reviewer/technologist roles) Why Avoid Generative Legal AI Systems? Hallucination, Overre... employment demand by role (junior drafters vs. skilled reviewers/auditors/techno...
Technological transformation in agentic finance is economically inevitable, making proactive intervention critically urgent.
Author claim synthesizing the paper's argument and modeling results (normative conclusion based on earlier analysis and assertions, not a validated empirical finding).
medium negative STRENGTHENING FINANCIAL WORKFORCE COMPETITIVENESS: A CURRICU... likelihood of technology-driven structural change in the finance workforce
Surveillance intensity is associated with hyper-vigilance (reported effect = -4.213).
One of the six propositions from the paper's trilevel framework; the abstract reports an effect value of '-4.213' associated with surveillance intensity → hyper-vigilance.
medium negative Algorithmic Control and Psychological Risk in Digitally Mana... hyper-vigilance (psychological arousal/state)
Platform workers receive 36.3% more third-party ratings than traditional workers.
Quantitative synthesis/summary reported in the paper (no primary sample size in abstract); likely aggregated from included studies.
medium negative Algorithmic Control and Psychological Risk in Digitally Mana... number of third-party ratings received
Platform workers experience 59.6% higher digital speed determination than traditional workers.
Quantitative synthesis/summary reported in the paper (no primary sample size given in the abstract); presumably aggregated from included studies comparing platform and traditional workers.
medium negative Algorithmic Control and Psychological Risk in Digitally Mana... digital speed determination
The pre-existing AI community dissolved as the tools went mainstream, and the new vocabulary was absorbed into existing careers rather than binding a new occupation.
Interpretation of resume-data patterns: observed dispersion of previously coherent AI practitioners and spread of AI-related vocabulary into other occupational records rather than consolidation into a new occupational cluster.
medium negative NLP Occupational Emergence Analysis: How Occupations Form an... population cohesion / absorption into existing careers (dissolution of standalon...
Most existing candidate matching systems act as keyword filters, failing to handle skill synonyms and nonlinear careers, resulting in missed candidates and opaque match scores.
Paper's introductory assertion about limitations of most current systems. The excerpt does not cite empirical studies, statistics, or systematic reviews to substantiate this claim.
medium negative JobMatchAI An Intelligent Job Matching Platform Using Knowle... limitations of extant systems: keyword-filter behavior, failure on skill synonym...
Counterfactual simulations show that modest salary increases have a smaller effect on predicted attrition than eliminating overtime (in this dataset and model).
Comparative counterfactual experiments run on the calibrated logistic model: simulations altering salary vs. altering overtime feature; reported that overtime elimination outperforms modest pay increases in retained headcount and probability reductions (exact salary-change amounts and comparative numbers not given in the summary).
medium negative Explainable AI for Employee Retention in Green Human Resourc... change in predicted attrition probability and aggregated retained headcount unde...
In the dataset used, eliminating overtime could potentially retain about 31 employees — a larger effect than modest salary increases.
Aggregated counterfactual simulation on the IBM HR Analytics dataset: after setting overtime to zero for applicable records, the model-predicted net retained headcount ≈ 31; compared to simulations of modest salary increases which yielded smaller retained headcount (exact salary-change magnitude and headcount numbers not provided).
medium negative Explainable AI for Employee Retention in Green Human Resourc... predicted retained headcount (number of employees whose attrition probability fa...
Eliminating overtime could lower predicted attrition probability by 17.35% for affected employees (per the model's counterfactual simulation).
Counterfactual policy simulation using the calibrated logistic model on the IBM HR Analytics dataset: set overtime feature to zero for affected employees and compute change in each employee's calibrated attrition probability; reported average reduction = 17.35%.
medium negative Explainable AI for Employee Retention in Green Human Resourc... change in calibrated predicted attrition probability (percentage point reduction...
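The counterfactual procedure these entries describe (set the overtime feature to zero, recompute calibrated attrition probabilities, aggregate the per-employee reductions into retained headcount) can be sketched in a few lines. The coefficients and employee records below are invented for illustration, not the paper's calibrated model:

```python
import math

# Illustrative logistic attrition model: P(leave) = sigmoid(b0 + b1*overtime + b2*salary_k).
# Coefficients are made up for this sketch.
B0, B_OVERTIME, B_SALARY = -1.0, 1.2, -0.02

def p_attrition(overtime, salary_k):
    z = B0 + B_OVERTIME * overtime + B_SALARY * salary_k
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical employees: (works overtime?, salary in thousands).
employees = [(1, 40), (1, 65), (0, 50), (1, 90)]

# Counterfactual: eliminate overtime and measure the drop in predicted attrition.
baseline = [p_attrition(ot, s) for ot, s in employees]
counterfactual = [p_attrition(0, s) for _, s in employees]

# Expected extra retained headcount = sum of per-employee probability reductions.
retained = sum(b - c for b, c in zip(baseline, counterfactual))
print(f"expected extra retained headcount: {retained:.2f}")
```

A salary counterfactual is run the same way by perturbing salary_k instead of overtime, which is how the paper compares the two policies on retained headcount.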
AI adoption is skill-biased and spatially uneven, increasing risks of labor-market exclusion among low-educated, middle-aged workers in high-AI regions.
Inference from observed negative associations between AI-rich regions and employment intention for low-educated respondents in the survey of 889; supported by region-level AI adoption proxies used in regressions.
medium negative Analysis of the Impact of Artificial Intelligence on Middle-... self-reported willingness to continue working before retirement (employment inte...
Regional heterogeneity: eastern and northern areas with greater AI penetration intensify displacement pressure on low-skilled, pre-retirement workers.
Subsample/interaction results in the regression analysis separating regions (Beijing, Guangzhou, Lanzhou and broader eastern/northern regional classification) and linking regional AI penetration proxies to employment intention outcomes among low-skilled workers.
medium negative Analysis of the Impact of Artificial Intelligence on Middle-... self-reported willingness to continue working before retirement (employment inte...
Low-educated workers—especially in eastern and northern regions with greater AI adoption—experience increased displacement pressure and lower employment intent.
Interaction/heterogeneity analysis from multivariate regressions on the sample of 889 respondents, using region-level AI adoption intensity (proxied by region) to identify differential associations by education level; stronger negative associations for low-educated respondents in eastern and northern areas.
medium negative Analysis of the Impact of Artificial Intelligence on Middle-... self-reported willingness to continue working before retirement (employment inte...
Higher household economic pressure is negatively associated with willingness to remain employed pre-retirement.
Regression controls included household economic pressure measured in the cross-sectional survey (n=889); coefficient on economic pressure indicated a negative association with employment intention.
medium negative Analysis of the Impact of Artificial Intelligence on Middle-... self-reported willingness to continue working before retirement (employment inte...
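The heterogeneity analyses above boil down to asking whether the AI-exposure association with employment intention differs by education group. A stylized stdlib-only sketch using group means on hypothetical survey rows (the paper uses multivariate regressions on the n=889 sample; every number here is invented):

```python
# Hypothetical survey rows: (high_ai_region?, low_educated?, employment_intention in [0, 1]).
rows = [
    (1, 1, 0.2), (1, 1, 0.3), (0, 1, 0.6), (0, 1, 0.7),
    (1, 0, 0.6), (1, 0, 0.7), (0, 0, 0.7), (0, 0, 0.8),
]

def mean_intention(ai, low_ed):
    vals = [y for a, e, y in rows if a == ai and e == low_ed]
    return sum(vals) / len(vals)

# Association of high-AI regions with employment intention, by education group.
gap_low_ed = mean_intention(1, 1) - mean_intention(0, 1)
gap_high_ed = mean_intention(1, 0) - mean_intention(0, 0)

# The interaction effect is the difference of these gaps: a more negative value
# means high-AI regions are associated with a larger intention drop for
# low-educated workers, the pattern the entries above report.
interaction = gap_low_ed - gap_high_ed
print(f"low-ed gap {gap_low_ed:+.2f}, high-ed gap {gap_high_ed:+.2f}, interaction {interaction:+.2f}")
```

In a regression this interaction would be the coefficient on a low_educated x high_ai_region term with controls; the group-mean version shown here is the unadjusted analogue.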
Environmental and informational externalities from AI (energy use, privacy harms, bias) justify regulatory and Pigouvian-style interventions to correct market failures.
Conceptual and policy literature reviewed, combined with empirical observations about environmental impacts and privacy/bias incidents reported in prior studies; the paper does not provide new causal estimates of externality magnitudes.
medium negative The Evolution and Societal Impact of Artificial Intelligence... externality magnitudes (environmental costs, privacy/bias harms) and welfare eff...
AI may alter firms' competitive dynamics by amplifying scale advantages and platform effects, making antitrust, data portability, and competition policy relevant to preserve contestability and innovation.
Synthesis of industrial organization theory and empirical observations of platform markets and data-driven firms cited in the literature review; no primary empirical study included in this paper.
medium negative The Evolution and Societal Impact of Artificial Intelligence... market concentration, competition levels, and innovation dynamics
The under‑use of external text sources in the reviewed literature may be due to privacy, legal/regulatory uncertainty, or integration costs.
Authors' interpretation linking observed low coverage of external text sources (social media, news, reviews) in the 109 articles to plausible barriers (privacy/regulation/integration); no direct empirical test in the review.
medium negative Natural language processing in bank marketing: a systematic ... use of external text sources in marketing research and barriers to their use
Widespread deployment of similar models could create correlated failures or fraud vectors, implying systemic risk that may warrant macroprudential attention.
Analytic caution based on model homogeneity and case/literature discussion; speculative systemic risk concern rather than empirically demonstrated.
medium negative Explore the Impact of Generative AI on Finance and Taxation systemic correlated failure risk, incidence of correlated fraud events
There is regulatory uncertainty around AI-generated filings and responsibility/liability for automated outputs.
Analysis and literature review discuss unclear regulatory positions and legal risks noted in case organizations' deployment considerations.
medium negative Explore the Impact of Generative AI on Finance and Taxation regulatory/compliance risk exposure for AI-generated filings
Integration complexity with legacy ERP/financial systems and sharing-center processes is a significant implementation challenge.
Case study narratives describe integration work and friction points; analytic framing highlights ERP compatibility issues.
medium negative Explore the Impact of Generative AI on Finance and Taxation integration effort/time/cost, compatibility with ERP systems
Model hallucinations, lack of explainability, and limited audit trails limit safe adoption.
Paper cites literature and case observations about model reliability and explainability issues; examples and discussion are qualitative.
medium negative Explore the Impact of Generative AI on Finance and Taxation model reliability (hallucination incidence), explainability/auditability metrics
Data privacy, confidentiality, and cross-border data transfer concerns are important barriers to deployment.
Challenges enumerated from case studies and literature; specific organizational concerns cited in cases (Xiaomi, Deloitte) and in regulatory discussion.
medium negative Explore the Impact of Generative AI on Finance and Taxation deployment constraints related to data privacy (e.g., blocked data flows, need f...
Explainability, auditability, or data-localization requirements could favor larger vendors with compliance capacity, increasing market concentration and affecting competition among AI suppliers.
Market-structure argument grounded in regulatory-compliance burden analysis and comparative examples; not supported by empirical market data in the study.
medium negative ARTIFICIAL INTELLIGENCE AND ADMINISTRATIVE GOVERNANCE: A CRI... market concentration and competition among AI vendors (supplier market structure...
Legal uncertainty and strict procedural requirements increase compliance costs and regulatory risk, which can slow AI adoption by firms and public agencies.
Theoretical economic implications drawn from legal analysis and comparative observations; no empirical measurement of costs or adoption rates in the study.
medium negative ARTIFICIAL INTELLIGENCE AND ADMINISTRATIVE GOVERNANCE: A CRI... AI adoption rate and investment risk (speed and likelihood of procurement/invest...
AI can restrict or reshape human administrative discretion in legally sensitive ways.
Doctrinal analysis of statutory specificity and formal procedural requirements in civil-law contexts, illustrated with Vietnam as the exemplar case; comparative observations.
medium negative ARTIFICIAL INTELLIGENCE AND ADMINISTRATIVE GOVERNANCE: A CRI... scope of administrative discretion (degree of human decision-making latitude)
Capabilities and data advantages for certain vendors could lead to market concentration and platform dominance in AI-driven educational feedback.
Expert concern synthesized from the workshop of 50 scholars about market dynamics; theoretical warning without empirical market-structure analysis in the report.
medium negative The Future of Feedback: How Can AI Help Transform Feedback t... market concentration measures (market share, Herfindahl index); entry barriers; ...
Differential access to high-quality AI feedback systems and bias in training data can exacerbate educational inequalities and harm marginalized groups.
Expert consensus and thematic analysis from the 50-scholar workshop, raising equity and bias risks; no empirical subgroup effectiveness estimates included.
medium negative The Future of Feedback: How Can AI Help Transform Feedback t... access disparities; differential effectiveness by subgroup; measures of algorith...
Learners may over-rely on AI feedback or game systems to obtain desirable responses, reducing effortful learning.
Workshop participant concerns synthesized qualitatively; cited as risk and an open empirical question—no experimental data provided.
medium negative The Future of Feedback: How Can AI Help Transform Feedback t... learner reliance on AI (usage patterns); changes in effortful learning behaviors...
Reliance on preference signals risks learning spurious proxies and producing unstable behavior under distribution shift.
Theoretical argument supported by examples of spurious proxies in ML and by observations in RLHF-trained models; the paper cites literature showing proxy behavior but does not present a unified empirical quantification specific to RLHF across many tasks.
medium negative Via Negativa for AI Alignment: Why Negative Constraints Are ... frequency of spurious-proxy-driven failures and degradation in behavior under di...
Positive preference signals are continuous, context-dependent, and entangled with surface correlates (e.g., agreement with the user), which causes models trained on them to pick up spurious proxies and exhibit sycophancy and brittleness.
Conceptual/theoretical argument in the paper describing structural properties of preference spaces, supported by cited observations of sycophantic behavior in models trained with preference-based objectives. No single definitive empirical quantification is provided within the paper; supporting examples are drawn from recent literature.
medium negative Via Negativa for AI Alignment: Why Negative Constraints Are ... incidence of sycophantic behavior and brittleness (e.g., tendency to agree with ...
There is a risk of manipulation and misinformation if argument mining/synthesis is unregulated or misaligned with social incentives, creating externalities that may justify public intervention.
Conceptual risk assessment combining known misinformation dynamics and AI capabilities; no empirical incident data provided.
medium negative Argumentative Human-AI Decision-Making: Toward AI Agents Tha... incidence of manipulation/misinformation attributable to argument-mining/synthes...
Increased error risk and weaker explainability from GLAI will raise malpractice and liability exposure for firms and lawyers, driving up insurance and compliance costs.
Legal-risk analysis and economic reasoning connecting explainability/liability to insurance costs; no empirical cost studies presented.
medium negative Why Avoid Generative Legal AI Systems? Hallucination, Overre... malpractice/liability exposure levels and associated insurance/compliance costs
The combination of hallucination and professional overreliance strains existing regulatory goals (e.g., explainability, human oversight) within European AI governance frameworks.
Legal and regulatory analysis mapping technical and behavioral risks onto European AI governance goals; references to statutory/regulatory texts and policy debates. Qualitative argumentation rather than empirical test.
medium negative Why Avoid Generative Legal AI Systems? Hallucination, Overre... compatibility between GLAI deployment dynamics and regulatory obligations (e.g.,...
Fabricated or opaque intermediate data and reasoning in GLAI weaken explainability, making it difficult to provide meaningful explanations about how outputs were produced.
Conceptual analysis of token-prediction architectures, literature on explainability limits of LLMs, and legal/regulatory analysis referencing explainability requirements. No empirical measurement.
medium negative Why Avoid Generative Legal AI Systems? Hallucination, Overre... quality/meaningfulness of explanations about model outputs (explainability)
Hallucinated content produced by GLAI is often linguistically fluent and persuasive, increasing the risk that legal professionals will accept it without verification.
Literature synthesis on model fluency and behavioral literature on trust in coherent authoritative outputs, plus illustrative vignettes. No original experimental data or sample size.
medium negative Why Avoid Generative Legal AI Systems? Hallucination, Overre... rate of professional acceptance or uncritical reliance on fluent but incorrect o...
This architectural mismatch (token-prediction vs. formal legal reasoning) contributes to confident but factually incorrect outputs (hallucinations) in GLAI.
Technical/conceptual analysis plus synthesis of existing literature on hallucinations in generative models; illustrative examples and vignettes provided. No primary empirical measurement in the paper.
medium negative Why Avoid Generative Legal AI Systems? Hallucination, Overre... incidence and nature of hallucinated (factually incorrect) outputs produced by G...
Top-performing community submissions (including baselines and competition entries) still leave a performance gap relative to elite human play on battling tasks.
Paper reports comparative evaluation results showing win-rate and other metrics for heuristic, RL, LLM baselines and community submissions versus human (elite) benchmarks; analysis highlights a remaining gap.
medium negative The PokeAgent Challenge: Competitive and Long-Context Learni... performance gap measured primarily by win-rate (Battling) and strategic robustne...
Misalignment or poor meta-control could produce persistent unsafe behaviors in autonomous learners; governance and oversight mechanisms will be crucial.
Risk analysis based on conceptual failure modes for meta-control; no empirical incidents reported in the paper.
medium negative Why AI systems don't learn and what to do about it: Lessons ... frequency and severity of unsafe behaviors; successful governance interventions
Current models transfer poorly across domains, are brittle in nonstationary environments, and are inefficient in physical/embodied tasks.
Synthesis of known challenges from prior literature and practical experience; paper cites these as motivating observations rather than reporting new data.
medium negative Why AI systems don't learn and what to do about it: Lessons ... cross-domain generalization; robustness under nonstationarity; sample efficiency...
Current models have limited meta-control and do not autonomously decide when to explore, imitate, consult prior knowledge, or consolidate.
Conceptual critique based on typical ML training pipelines and limited on-line decision-making modules; no empirical tests in paper.
medium negative Why AI systems don't learn and what to do about it: Lessons ... autonomy in meta-decisions (e.g., fraction of exploration/imitative acts chosen ...
There is weak integration between passive observation (supervised/representation learning) and active experimentation (reinforcement/exploratory learning) in current systems.
Observation of methodological separation in current literature and systems; conceptual discussion in the paper.
medium negative Why AI systems don't learn and what to do about it: Lessons ... performance on mixed observation-action tasks; ability to combine passive and ac...
Current AI models lack the architectures and control mechanisms required for sustained, autonomous learning in dynamic real-world settings.
Conceptual/theoretical analysis presented in the paper; synthesis of limitations observed in existing literature and practices (no new empirical data provided).
medium negative Why AI systems don't learn and what to do about it: Lessons ... ability to sustain autonomous learning in dynamic real-world environments