The Commonplace

Evidence (2608 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome | Positive | Negative | Mixed | Null | Total
Other | 609 | 159 | 77 | 736 | 1615
Governance & Regulation | 664 | 329 | 160 | 99 | 1273
Organizational Efficiency | 624 | 143 | 105 | 70 | 949
Technology Adoption Rate | 502 | 176 | 98 | 78 | 861
Research Productivity | 348 | 109 | 48 | 322 | 836
Output Quality | 391 | 120 | 44 | 40 | 595
Firm Productivity | 385 | 46 | 85 | 17 | 539
Decision Quality | 275 | 143 | 62 | 34 | 521
AI Safety & Ethics | 183 | 241 | 59 | 30 | 517
Market Structure | 152 | 154 | 109 | 20 | 440
Task Allocation | 158 | 50 | 56 | 26 | 295
Innovation Output | 178 | 23 | 38 | 17 | 257
Skill Acquisition | 137 | 52 | 50 | 13 | 252
Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252
Employment Level | 93 | 46 | 96 | 12 | 249
Firm Revenue | 130 | 43 | 26 | 3 | 202
Consumer Welfare | 99 | 51 | 40 | 11 | 201
Inequality Measures | 36 | 105 | 40 | 6 | 187
Task Completion Time | 134 | 18 | 6 | 5 | 163
Worker Satisfaction | 79 | 54 | 16 | 11 | 160
Error Rate | 64 | 78 | 8 | 1 | 151
Regulatory Compliance | 69 | 64 | 14 | 3 | 150
Training Effectiveness | 81 | 15 | 13 | 18 | 129
Wages & Compensation | 70 | 25 | 22 | 6 | 123
Team Performance | 74 | 16 | 21 | 9 | 121
Automation Exposure | 41 | 48 | 19 | 9 | 120
Job Displacement | 11 | 71 | 16 | 1 | 99
Developer Productivity | 71 | 14 | 9 | 3 | 98
Hiring & Recruitment | 49 | 7 | 8 | 3 | 67
Social Protection | 26 | 14 | 8 | 2 | 50
Creative Output | 26 | 14 | 6 | 2 | 49
Skill Obsolescence | 5 | 37 | 5 | 1 | 48
Labor Share of Income | 12 | 13 | 12 | | 37
Worker Turnover | 11 | 12 | 3 | | 26
Industry | 1 | | | | 1
Active filter: Skills & Training
Income inequality, measured by the Gini index, rises moderately in every scenario we examine due to the polarising effect of job losses and wage and capital income increases on the income distribution.
Calculation of Gini index across multiple simulated scenarios using the SWITCH-linked distributional analysis; reported in the report.
high · negative · Artificial Intelligence and income inequality in Ireland · Gini index (income inequality)
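The report's headline metric, the Gini index, can be computed directly from household incomes. A minimal sketch of that calculation follows; the income draws, the 7% displacement share (echoing the central scenario), and the 60% income haircut are invented for illustration and are not SWITCH outputs:

```python
import random

def gini(incomes):
    """Gini coefficient from sorted incomes:
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with i = 1..n over sorted x."""
    x = sorted(incomes)
    n, total = len(x), sum(x)
    return sum((2 * i - n - 1) * xi for i, xi in enumerate(x, start=1)) / (n * total)

random.seed(0)
# Hypothetical pre-AI household incomes (roughly log-normal), 10,000 households
baseline = [random.lognormvariate(10.5, 0.5) for _ in range(10_000)]
# Scenario: ~7% of households displaced; those households lose 60% of income
# (the haircut size is an invented illustration, not a report parameter)
scenario = [y * 0.4 if random.random() < 0.07 else y for y in baseline]

print(f"baseline Gini: {gini(baseline):.3f}")
print(f"scenario Gini: {gini(scenario):.3f}")  # higher: inequality rises
```

Because displacement hits households across the distribution while sparing most incomes, the scenario Gini rises relative to baseline, which is the qualitative pattern the claim describes.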
The largest average losses are experienced by middle and higher income households, for whom job displacement outweighs any wage or capital income gains. Lower income households also lose, but by much less.
Distributional results from microsimulation (SWITCH) applying scenario-led job displacement, wage and capital effects across income groups; reported in the report.
high · negative · Artificial Intelligence and income inequality in Ireland · change in household disposable income by income group
When these effects are combined, we find an average decline in household disposable income as a result of AI adoption.
Combined scenario simulations incorporating job displacement, wage effects and capital income effects linked to the Irish tax-benefit system using SWITCH; result reported in the report's main findings.
high · negative · Artificial Intelligence and income inequality in Ireland · household disposable income (average change)
These wage gains are not large enough to counterbalance the average fall in income due to job displacement.
Combined simulation results (displacement + wage effects) using scenario assumptions and microsimulation (SWITCH), reported in the report's distributional analysis.
high · negative · Artificial Intelligence and income inequality in Ireland · net effect on household income (wages versus displacement losses)
Those most likely to experience this disruption are found in higher income households, where the share of workers transitioning into unemployment is substantially larger than in lower income families.
Microsimulation (SWITCH) linking simulated job displacement scenarios to household income groups; results reported in the report.
high · negative · Artificial Intelligence and income inequality in Ireland · share of workers transitioning into unemployment by household income
In our central scenario — drawn from credible international estimates — around 7 per cent of current jobs could be displaced in the short–medium run.
Scenario simulation based on international estimates of AI exposure/adoption; central scenario reported in the report (linked to SWITCH microsimulation for distributional analysis).
high · negative · Artificial Intelligence and income inequality in Ireland · share of jobs displaced
AI tends to place higher earning and highly educated workers at greater risk of disruption, because the occupations most exposed to AI are predominantly in these groups.
Synthesis of international research on occupational exposure to AI and the report's analysis linking exposure to worker characteristics (education and earnings); presented as descriptive finding in the report.
high · negative · Artificial Intelligence and income inequality in Ireland · risk of job disruption / occupational exposure to AI
Result 2: When managers are short-termist or worker skill has external value, the decision-maker's optimal policy can produce the augmentation trap, leaving the worker worse off than if AI had never been adopted.
Analytical result from the dynamic model comparing planner/objective variations (short-termist manager or externalities) and showing an outcome labeled the 'augmentation trap'.
high · negative · The Augmentation Trap: AI Productivity and the Cost of Cogni... · worker welfare/productivity relative to non-adoption
Result 1: Even a decision-maker who fully anticipates skill erosion rationally adopts AI when front-loaded productivity gains outweigh long-run skill costs, producing steady-state loss: the worker ends up less productive than before adoption.
Analytical result from the dynamic model showing optimal adoption choice can lead to a steady-state where worker productivity is lower than pre-adoption (model-based comparative statics).
high · negative · The Augmentation Trap: AI Productivity and the Cost of Cogni... · steady-state worker productivity (relative to pre-adoption)
Experimental evidence shows that sustained use of AI tools can erode the expertise on which productivity gains depend (deskilling).
Statement in paper referencing experimental studies (no specific study, method, or sample size reported in the excerpt).
high · negative · The Augmentation Trap: AI Productivity and the Cost of Cogni... · worker expertise / skill level
These dynamics risk trapping workers in a 'low-skill trap'.
Synthesis of observed labour-market polarisation, persistent low-skill segment, and limited reskilling coverage from secondary sources (2020–2024); presented as a likely risk/consequence.
high · negative · Artificial Intelligence and labour market polarisation in In... · entrenchment of low-skill employment and reduced upward mobility
Limited reskilling coverage constrains workers' ability to adapt to AI-driven changes.
Paper reviews official reports and secondary data (2020–2024) indicating low coverage/uptake of reskilling programs in India and links this to limited adaptation capacity.
high · negative · Artificial Intelligence and labour market polarisation in In... · coverage/effectiveness of reskilling and workers' adaptive capacity
AI-driven change is intensifying wage disparities.
Paper links observed occupational shifts in secondary data (2020–2024) with widening wage gaps between high- and lower-skilled groups.
high · negative · Artificial Intelligence and labour market polarisation in In... · wage disparities between skill groups
Routine middle-skilled roles are declining.
Secondary data and official reports from 2020–2024 documenting reductions in middle-skill occupations, interpreted through SBTC/Human Capital frameworks.
high · negative · Artificial Intelligence and labour market polarisation in In... · decline in middle-skill jobs / job displacement in routine roles
There is a 'capability-demand inversion' where skills most demanded in AI-exposed jobs are those LLMs perform least well at in our benchmark.
Cross-referencing SAFI performance with Anthropic Economic Index demand data (reported in paper); described as an observed inversion pattern.
high · negative · The AI Skills Shift: Mapping Skill Obsolescence, Emergence, ... · relationship between skill demand in AI-exposed jobs and SAFI performance
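A capability-demand inversion of this kind would show up as a negative rank correlation between skill demand and model performance. A toy sketch with invented numbers (not SAFI or Anthropic Economic Index data):

```python
def rank(xs):
    """Ranks 1..n (no tie handling; the illustrative values below are distinct)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order, start=1):
        r[i] = float(pos)
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / var

# Hypothetical skills: demand intensity in AI-exposed jobs vs. benchmark score
demand = [9.0, 7.5, 6.0, 4.0, 2.0]
ai_score = [0.20, 0.35, 0.50, 0.70, 0.90]
print(f"Spearman rho = {spearman(demand, ai_score):.2f}")  # -1.00: full inversion
```

A strongly negative rho is the signature of the claimed pattern: the skills most in demand are exactly where benchmark performance is weakest.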
We posit that persistence is reduced because AI conditions people to expect immediate answers, denying them the experience of working through challenges on their own.
Authors' proposed psychological mechanism / explanation inferred from observed behavior; presented as a hypothesis rather than directly proven causal mediator.
high · negative · AI Assistance Reduces Persistence and Hurts Independent Perf... · mechanistic explanation for reduced persistence (expectation of immediate answer...
These negative effects (reduced persistence and impaired unassisted performance) emerge after only brief interactions with AI (approximately 10 minutes).
Experimental manipulation / exposure in RCTs where participants interacted with AI for about 10 minutes and subsequent outcomes were measured.
high · negative · AI Assistance Reduces Persistence and Hurts Independent Perf... · onset/time to observable effect (persistence and unassisted performance after ~1...
People are more likely to give up after interacting with AI (increased likelihood of quitting tasks unassisted).
Randomized controlled trials (N = 1,222) measuring rates of task abandonment/giving-up after AI interaction vs. control.
high · negative · AI Assistance Reduces Persistence and Hurts Independent Perf... · likelihood of giving up / task abandonment
AI assistance impairs unassisted performance: although AI improves short-term performance, people perform significantly worse without AI after interacting with it.
Randomized controlled trials (N = 1,222) comparing performance with and without AI assistance across tasks; causal inference from randomized assignment.
high · negative · AI Assistance Reduces Persistence and Hurts Independent Perf... · unassisted task performance (accuracy/quality when working without AI after prio...
Through a series of randomized controlled trials on human-AI interactions (N = 1,222), we provide causal evidence that AI assistance reduces persistence.
Randomized controlled trials (RCTs) on human-AI interactions with total sample size N = 1,222; persistence measured after AI interaction across tasks.
high · negative · AI Assistance Reduces Persistence and Hurts Independent Perf... · persistence (willingness to continue working on tasks without AI)
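The headline contrast in an RCT like this (quit rate after AI exposure vs. control) is typically tested with a two-proportion z-test. A sketch with invented arm sizes and quit counts; only the total N = 1,222 comes from the paper:

```python
from statistics import NormalDist

def two_proportion_z(x_a, n_a, x_b, n_b):
    """Two-sided two-proportion z-test with a pooled variance estimate."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical split: 611 AI-exposed vs. 611 control (totals match N = 1,222);
# the quit counts 220 and 150 are invented for illustration
z, p = two_proportion_z(220, 611, 150, 611)
print(f"z = {z:.2f}, two-sided p = {p:.4g}")
```

With randomized assignment, a significant difference in quit rates between arms supports a causal reading of the kind the paper reports.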
Occupations are not eradicated instantaneously, but gradually encroached upon via atomic actions.
Conceptual argument presented by the authors as part of their theoretical framing (Tech-Risk Dual-Factor Model); no empirical count reported for this specific claim.
high · negative · Bounded by Risk, Not Capability: Quantifying AI Occupational... · process of occupational change / displacement
Existing task-based evaluations predominantly measure theoretical "exposure" to AI capabilities, ignoring critical frictions of real-world commercial adoption: liability, compliance, and physical safety.
Authoritative statement in paper contrasting prior task-based exposure evaluations with the paper's focus on business/institutional frictions (liability, compliance, physical safety). No numeric sample; literature critique based on conceptual analysis.
high · negative · Bounded by Risk, Not Capability: Quantifying AI Occupational... · theoretical automation exposure measurement practices
Up to 25% of routine administrative tasks face high automation risk.
Quantitative survey of 150 leading Nigerian firms across finance, tech, and manufacturing reporting the share of tasks at high automation risk.
high · negative · Human Capital and the AI-Powered Future of Work: (Training, ... · share of routine administrative tasks at high automation risk
There is a significant deficit in high-demand technical competencies such as data engineering, machine learning maintenance, and AI ethics within the Nigerian workforce.
Findings reported from the quantitative survey of 150 leading Nigerian firms (finance, tech, manufacturing) supplemented by qualitative workforce interviews and policy analysis.
high · negative · Human Capital and the AI-Powered Future of Work: (Training, ... · availability/deficit of technical competencies (data engineering, ML maintenance...
Practitioners identified specific functional deficiencies in AI: inability to maintain sustained partnerships.
Theme from semi-structured interviews with 10 practitioners; cited as an example of the functional gap.
high · negative · Bridging the Socio-Emotional Gap: The Functional Dimension o... · AI capability to maintain sustained collaborative partnerships
Practitioners identified specific functional deficiencies in AI: inability to adapt contextually.
Theme from semi-structured interviews with 10 practitioners; cited as an example of the functional gap.
high · negative · Bridging the Socio-Emotional Gap: The Functional Dimension o... · AI capability for contextual adaptation in collaborative work
Practitioners identified specific functional deficiencies in AI: inability to negotiate responsibilities.
Theme from semi-structured interviews with 10 practitioners; cited as an example of the functional gap.
high · negative · Bridging the Socio-Emotional Gap: The Functional Dimension o... · AI capability to negotiate responsibilities in teamwork
Practitioners currently view AI models as intellectual teammates rather than social partners and expect fewer SEI attributes from them than from human teammates.
Qualitative findings from semi-structured interviews with 10 software practitioners reported in the study.
high · negative · Bridging the Socio-Emotional Gap: The Functional Dimension o... · practitioners' expectations of SEI attributes in AI versus human teammates
Current AI systems lack SEI capabilities that humans bring to teamwork, creating a potential gap in collaborative dynamics.
Framed as background/context in the paper; asserted rather than empirically tested in this study.
high · negative · Bridging the Socio-Emotional Gap: The Functional Dimension o... · presence of SEI capabilities in AI systems (vs. humans)
Unbalanced or poorly governed adoption of Big Data and AI contributes to increased systemic risk, cybersecurity vulnerability, regulatory fragmentation and third-party dependence on BigTech platforms.
Argument based on qualitative literature review and synthesis of international empirical studies and comparative sector analysis; no single-sample empirical study in this paper.
high · negative · Implications of Big Data Technologies for the Resilience of ... · systemic risk; cybersecurity vulnerability; regulatory fragmentation; third-part...
Extreme automation (high AI intensity) causes employment decline.
Part of the U-shaped relationship reported by the paper's empirical results; described qualitatively in the abstract/summary.
high · negative · Impact Of Artificial Intelligence (AI) On Employment · employment decline
Task orchestration is the most under-researched dimension among the five workplace-design components.
Finding from the PRISMA-guided systematic review of 120 papers, which mapped coverage across the five dimensions and identified task orchestration as having the least research attention.
high · negative · From Automation to Augmentation: A Framework for Designing H... · volume/coverage of research on task orchestration
Decision authority allocation emerges as the binding constraint for Society 5.0 transitions.
Result synthesized from the systematic review and theoretical analysis mapping the five workplace-design dimensions; stated as the binding constraint in the paper's findings.
high · negative · From Automation to Augmentation: A Framework for Designing H... · constraint on transitions to human-centric (Society 5.0) technology integration
Under low emotional intelligence, the model predicts higher risks of over-reliance on AI, emotionally detached communication, and weaker delegation quality.
Theoretical predictions derived from the EI-moderated human–AI model presented in the paper.
high · negative · LEADER EMOTIONAL INTELLIGENCE IN THE GENERATIVE AI ERA: “HUM... · delegation quality (and over-reliance / communication quality)
Indonesia's current labour-law framework is reactive, focused on post-layoff compensation, and not yet able to address the long-term impacts of AI disruption.
Normative analysis of statutes and regulations together with findings from the reviewed literature; conclusion reported by the study's authors.
high · negative · Reformasi Hukum Ketenagakerjaan di Era Artificial Intelligen... · legal-policy orientation (reactive vs proactive) and adequacy of responses to the impacts ...
There is as yet no explicit regulation of retraining obligations or of mechanisms for distributing the benefits of technology fairly within Indonesia's current labour-law framework.
Finding from an analysis of national legislation (the Job Creation Law, UU Cipta Kerja, and its implementing regulations) and the literature examined in this normative study.
high · negative · Reformasi Hukum Ketenagakerjaan di Era Artificial Intelligen... · regulatory gap concerning retraining obligations and mechanisms for distributing b...
The adoption of AI raises legal challenges concerning the protection of workers' rights, social justice, and the sustainability of the employment system.
Normative analysis of AI's socio-economic consequences, synthesised from national (SINTA-indexed) and international literature; conceptual and comparative approaches are described in the methods.
high · negative · Reformasi Hukum Ketenagakerjaan di Era Artificial Intelligen... · need for legal protection of workers' rights and social justice
The rapid development of Artificial Intelligence (AI) has fundamentally changed the structure of Indonesia's labour market, raising the risk that human jobs will be replaced by automation technology.
Background statement supported by a literature review of SINTA-indexed national journals and reputable international journals (method: normative legal research using statutory, conceptual, and comparative approaches).
high · negative · Reformasi Hukum Ketenagakerjaan di Era Artificial Intelligen... · risk of jobs being replaced by automation (job displacement risk)
The common claim that generative AI simply amplifies the Dunning–Kruger effect is too coarse to capture the available evidence.
Paper's synthesis of heterogeneous empirical findings from human–AI interaction, learning research, and model evaluation, used to critique the uniform-amplification interpretation; no single empirical countertest reported.
high · negative · Beyond the Steeper Curve: AI-Mediated Metacognitive Decoupli... · validity of the 'amplified Dunning–Kruger' interpretation
LLM use degrades metacognitive accuracy and flattens the classic competence–confidence gradient across skill groups (i.e., reduces calibration and narrows differences in self-assessed confidence by skill level).
Synthesis of studies from human–AI interaction and learning research reported in the paper that document worsened calibration and a reduction in the competence–confidence gradient when users rely on LLM outputs; the paper does not report a single combined sample size.
high · negative · Beyond the Steeper Curve: AI-Mediated Metacognitive Decoupli... · metacognitive accuracy / calibration and competence–confidence gradient
Prominent studies predict substantial job displacement due to automation.
Paper asserts this as background, referencing the existence of prominent studies in the literature (no specific citations or sample sizes provided in the abstract).
high · negative · AI Civilization and the Transformation of Work · job losses / displacement
The financial planning and investment management profession is undergoing a radical transformation driven by Generative AI (GenAI) and Agentic AI, creating urgent workforce displacement challenges that require coordinated government policy intervention alongside educational reform.
Author assertion in the paper's introduction/abstract; framing argument based on the paper's synthesized analysis (no empirical sample, no reported statistical test).
high · negative · STRENGTHENING FINANCIAL WORKFORCE COMPETITIVENESS: A CURRICU... · rate of workforce displacement in the financial planning and investment manageme...
AI-driven job displacement disproportionately affects low-skilled workers.
Reported empirical result from the paper's PLS-SEM analysis on the 351-respondent dataset.
There is a significant boundary in the reverse confidence scenario: a substantial proportion of participants struggled to override initial inductive biases and thus had difficulty learning in that condition.
Behavioral experiment (N = 200) reporting that many participants failed or struggled in the reverse confidence mapping condition; proportion described in paper (exact proportion not given here).
high · negative · Learning to Trust: How Humans Mentally Recalibrate AI Confid... · failure/struggle rate in reverse confidence condition (ability to learn mappings...
Currently, the region remains reactive as a 'recipient' rather than a 'creator' or an effective partner in the AI ecosystem.
Characterization reported by the authors based on their regional research and field study (qualitative findings from leaders across public/private sectors).
high · negative · Charting AI Governance Future in the Arab Region: A Policy R... · degree of domestic AI creation/innovation versus reception/adoption
This gap hinders the ability of many governments in the region to push their countries toward joining the ranks of those benefiting from the AI revolution—both in developing the public sector and supporting economic growth and social development.
Authors' analysis and interpretation based on the regional research/field study described in the report.
high · negative · Charting AI Governance Future in the Arab Region: A Policy R... · governments' ability to benefit from AI (public sector development; economic and...
The Arab region’s capacity for Artificial Intelligence (AI) governance remains limited relative to the accelerating pace of global AI developments and associated challenges.
Stated conclusion in the executive report based on a regional field study (authors' analysis of interviews/surveys and research across the region).
As artificial intelligence assumes cognitive labor, no existing quantitative framework predicts when human capability loss becomes catastrophic.
Introductory/background claim asserted by authors motivating the study (literature gap claim).
high · negative · The enrichment paradox: critical capability thresholds and i... · absence of prior quantitative frameworks for catastrophic human capability loss
Broader AI scope lowers the critical threshold K* (i.e., more general AI reduces the K* value at which capability collapse occurs).
Model sensitivity analysis / simulations showing K* varies with assumed scope of AI (reported in model calibration discussion).
high · negative · The enrichment paradox: critical capability thresholds and i... · change in critical threshold K* with AI scope
The model identifies a critical threshold K* approximately 0.85 (scope-dependent; broader AI scope lowers K*) beyond which capability collapses abruptly — the 'enrichment paradox.'
Model analysis and simulations calibrated across domains (paper reports computed threshold K* ≈ 0.85 and notes dependence on AI scope).
high · negative · The enrichment paradox: critical capability thresholds and i... · critical delegation/capability threshold (K*) at which human capability collapse...
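The claimed dynamics (gradual erosion below K*, abrupt collapse beyond it) can be illustrated with a toy recursion. Everything below, including the update rule and the rebuild/decay rates, is an invented sketch pinned only to the reported threshold of about 0.85; it does not reproduce the paper's model:

```python
def steady_state_capability(K, kappa=0.85, steps=500):
    """Toy recursion: hands-on practice on the remaining (1 - K) share of
    tasks rebuilds capability; delegation beyond kappa accelerates decay."""
    c = 1.0  # capability, normalised so pre-adoption = 1.0
    for _ in range(steps):
        rebuild = 0.10 * (1.0 - K) * (1.0 - c)                       # learning by doing
        decay = (0.01 + 0.20 * max(0.0, K - kappa) / (1.0 - kappa)) * c
        c = min(1.0, max(0.0, c + rebuild - decay))
    return c

for K in (0.50, 0.80, 0.85, 0.90, 0.95):
    # capability erodes gently below kappa, then drops sharply beyond it
    print(f"K = {K:.2f} -> steady-state capability ≈ {steady_state_capability(K):.2f}")
```

Sweeping the delegation share K shows the qualitative shape the paper describes: steady-state capability declines slowly up to the threshold, then falls off sharply once K exceeds it.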