The Commonplace

Evidence (2608 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome | Positive | Negative | Mixed | Null | Total
Other | 609 | 159 | 77 | 736 | 1615
Governance & Regulation | 664 | 329 | 160 | 99 | 1273
Organizational Efficiency | 624 | 143 | 105 | 70 | 949
Technology Adoption Rate | 502 | 176 | 98 | 78 | 861
Research Productivity | 348 | 109 | 48 | 322 | 836
Output Quality | 391 | 120 | 44 | 40 | 595
Firm Productivity | 385 | 46 | 85 | 17 | 539
Decision Quality | 275 | 143 | 62 | 34 | 521
AI Safety & Ethics | 183 | 241 | 59 | 30 | 517
Market Structure | 152 | 154 | 109 | 20 | 440
Task Allocation | 158 | 50 | 56 | 26 | 295
Innovation Output | 178 | 23 | 38 | 17 | 257
Skill Acquisition | 137 | 52 | 50 | 13 | 252
Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252
Employment Level | 93 | 46 | 96 | 12 | 249
Firm Revenue | 130 | 43 | 26 | 3 | 202
Consumer Welfare | 99 | 51 | 40 | 11 | 201
Inequality Measures | 36 | 105 | 40 | 6 | 187
Task Completion Time | 134 | 18 | 6 | 5 | 163
Worker Satisfaction | 79 | 54 | 16 | 11 | 160
Error Rate | 64 | 78 | 8 | 1 | 151
Regulatory Compliance | 69 | 64 | 14 | 3 | 150
Training Effectiveness | 81 | 15 | 13 | 18 | 129
Wages & Compensation | 70 | 25 | 22 | 6 | 123
Team Performance | 74 | 16 | 21 | 9 | 121
Automation Exposure | 41 | 48 | 19 | 9 | 120
Job Displacement | 11 | 71 | 16 | 1 | 99
Developer Productivity | 71 | 14 | 9 | 3 | 98
Hiring & Recruitment | 49 | 7 | 8 | 3 | 67
Social Protection | 26 | 14 | 8 | 2 | 50
Creative Output | 26 | 14 | 6 | 2 | 49
Skill Obsolescence | 5 | 37 | 5 | 1 | 48
Labor Share of Income | 12 | 13 | 12 |  | 37
Worker Turnover | 11 | 12 | 3 |  | 26
Industry | 1 |  |  |  | 1
Active filter: Skills & Training
A small number of AI corporations have unprecedented power.
Introductory chapter highlights the theme of concentrated corporate power in AI; asserted as an observational claim in the report's framing rather than derived from a presented empirical sample in the introduction.
Confidence: high · Direction: negative · Source: Introduction: Artificial Intelligence, Politics, and Politic... · Outcome: concentration of corporate power in the AI industry (market control, platform in...
WIOA is not well-equipped to support large-scale, cross-industry labor transitions.
Low observed incidence of cross-industry occupational transitions and limited shifts into less automation-exposed occupations in the WIOA data (2017-2023) lead authors to conclude the program is poorly suited for large-scale cross-industry reallocation.
Confidence: high · Direction: negative · Source: Did US Worker Retraining Reduce Participant Automation Expos... · Outcome: cross-industry occupational transitions / shifts in RTI after program participat...
A substantial portion of WIOA participants simply return to their prior field after program participation.
Descriptive and outcome analyses on the WIOA participation records (2017-2023) showing many participants re-enter the same occupation/industry rather than transitioning to different occupations.
Confidence: high · Direction: negative · Source: Did US Worker Retraining Reduce Participant Automation Expos... · Outcome: occupational/industry re-entry (return to prior field) following program partici...
WIOA rarely shifts workers into less automation-exposed work.
Analysis of WIOA administrative records (2017-2023) using a newly introduced 'Retrainability Index' that decomposes outcomes into post-intervention wage recovery and shifts in routine task intensity (RTI). The paper reports low incidence of downward RTI (movement into less automation-exposed occupations) among participants.
Confidence: high · Direction: negative · Source: Did US Worker Retraining Reduce Participant Automation Expos... · Outcome: change in Routine Task Intensity (RTI) of occupations post-participation
Frontier software engineering agents have saturated short-horizon benchmarks while regressing on the work that constitutes senior engineering: long-horizon, multi-engineer, ambiguous-specification deliverables.
Position asserted in the paper based on literature/benchmark trends and authors' field observations; no original empirical dataset or quantified analysis provided in the paper text excerpt.
Confidence: high · Direction: negative · Source: The Conversations Beneath the Code: Triadic Data for Long-Ho... · Outcome: performance on short-horizon benchmarks versus performance on long-horizon, mult...
Disparities may lead to AI bias and governance challenges that potentially leave the poorest communities excluded from the Fourth Industrial Revolution.
Paper lists AI bias and governance challenges as potential consequences of uneven AI development; presented as conceptual/ethical/political risks without empirical quantification in the excerpt.
Confidence: high · Direction: negative · Source: GLOBAL DISPROPORTIONS IN THE IMPLEMENTATION AND USE OF ARTIF... · Outcome: AI bias and governance failures leading to exclusion
These disparities risk causing economic isolation and social inequality.
Qualitative claim in the paper listing potential socio-economic risks of uneven AI adoption; no supporting empirical estimates in the excerpt.
Confidence: high · Direction: negative · Source: GLOBAL DISPROPORTIONS IN THE IMPLEMENTATION AND USE OF ARTIF... · Outcome: economic isolation and social inequality
These disparities carry the risk of a deepening digital divide.
Stated as a consequence/risk in the paper; presented qualitatively without empirical quantification in the excerpt.
Confidence: high · Direction: negative · Source: GLOBAL DISPROPORTIONS IN THE IMPLEMENTATION AND USE OF ARTIF... · Outcome: digital divide (differential access/use of digital technologies)
Projections indicate that without additional measures, these disparities are likely to increase.
Paper reports forward-looking projections or scenario analysis (methods, assumptions, and quantitative projection details not given in the excerpt).
Confidence: high · Direction: negative · Source: GLOBAL DISPROPORTIONS IN THE IMPLEMENTATION AND USE OF ARTIF... · Outcome: future global disparities / inequality in AI and digital access
Low-income regions (in particular parts of Africa and South Asia) lag significantly behind in both education and access to digital technologies.
Statement in the paper based on comparative assessment of education levels and digital access across regions; the excerpt provides no numeric data or described sample.
Confidence: high · Direction: negative · Source: GLOBAL DISPROPORTIONS IN THE IMPLEMENTATION AND USE OF ARTIF... · Outcome: education levels and access to digital technologies
Novices more often experience invisible failures: conversations that appear to end successfully but in fact miss the mark.
Annotation-based comparison in the 27K WildChat transcript sample indicating higher rates of 'invisible' failures (apparent successes that are actually incorrect or insufficient) among novice users.
Confidence: high · Direction: negative · Source: A paradox of AI fluency · Outcome: invisible failure rate (apparent success but incorrect outcome)
Fluent users experience more failures than novices.
Quantitative comparison of failure occurrences across user-fluency strata in the 27K annotated transcript sample from WildChat-4.8M.
Confidence: high · Direction: negative · Source: A paradox of AI fluency · Outcome: failure rate (errors / failed turns)
Workers acquire skills through generative AI tools but lack credible ways to signal or validate these skills in competitive freelance markets (a structural challenge the paper terms 'invisible competencies').
Reported finding and conceptual contribution based on the paper's mixed-methods study (survey + semi-structured interviews).
Confidence: high · Direction: negative · Source: Upskilling with Generative AI: Practices and Challenges for ... · Outcome: ability to signal/validate skills acquired via generative AI in freelance market...
There is a shift from learning as growth to learning as survival, where upskilling is oriented toward immediate market viability rather than long-term development.
Reported thematic finding from the paper's interviews and survey of freelance knowledge workers.
Confidence: high · Direction: negative · Source: Upskilling with Generative AI: Practices and Challenges for ... · Outcome: orientation of upskilling (immediate market viability vs long-term development)
Freelancers do not treat generative AI as their primary learning resource due to inconsistency, lack of contextual relevance, and verification overhead.
Reported finding from the paper's mixed-methods study (survey + semi-structured interviews with freelance knowledge workers).
Confidence: high · Direction: negative · Source: Upskilling with Generative AI: Practices and Challenges for ... · Outcome: role of generative AI in freelancers' learning stacks / barriers to using it as ...
Freelance workers must continually acquire new skills to remain competitive in online labor markets, yet they lack the organizational training, mentorship, and infrastructure available to traditional employees.
Framing statement in the paper's introduction / literature review (not reported as an empirical result from this study).
Confidence: high · Direction: negative · Source: Upskilling with Generative AI: Practices and Challenges for ... · Outcome: need for continual upskilling and availability of organizational training/mentor...
Suppression bias is the systematic suppression of correct-but-difficult recommendations when clinician capability falls below the execution threshold.
Definition and characterization of a proposed failure mode provided in the paper (conceptual/theoretical).
Confidence: high · Direction: negative · Source: Learning from Disagreement: Clinician Overrides as Implicit ... · Outcome: bias in recorded overrides leading to omission of correct-but-difficult recommen...
Obstacles exist for healthcare workers in rural areas that limit the benefits of technology.
Review conclusion noting persistent obstacles for rural healthcare workers drawn from the literature; synthesis of qualitative/quantitative sources (no sample size in excerpt).
Confidence: high · Direction: negative · Source: A Comprehensive Review of Technology Adoption and Its Impact... · Outcome: barriers to technology benefits in rural healthcare
Indian healthcare faces barriers to technological integration such as financial issues, poor infrastructure, and regulatory problems.
Review-identified barriers drawn from the literature (qualitative and quantitative studies summarized by the authors); no aggregate sample size reported in the excerpt.
Confidence: high · Direction: negative · Source: A Comprehensive Review of Technology Adoption and Its Impact... · Outcome: barriers to technology adoption
The marginal gains from genAI came at the high cost of recruiter deskilling, a trend that jeopardizes meaningful oversight of decision-making.
Qualitative interview evidence (n=22) where participants described loss of skills/deskilling associated with genAI use and concerns about oversight.
Confidence: high · Direction: negative · Source: Resume-ing Control: (Mis)Perceptions of Agency Around GenAI ... · Outcome: deskilling / erosion of practitioner skills and oversight capacity
The decision of whether or not to adopt genAI was often outside recruiters' control, with many feeling compelled to adopt due to directives from higher-ups in their business.
Reports from interviewed recruiters (n=22) indicating organizational pressure and top-down calls to integrate AI.
Confidence: high · Direction: negative · Source: Resume-ing Control: (Mis)Perceptions of Agency Around GenAI ... · Outcome: decision-making autonomy over tool adoption
Recruiters believe they have final authority across the recruiting pipeline, but genAI has become an invisible architect shaping the foundational information used for evaluation (e.g., defining a job, determining what counts as a good interview performance).
Qualitative findings from interviews with 22 recruiting professionals describing perceived authority versus the influence of genAI on informational inputs.
Confidence: high · Direction: negative · Source: Resume-ing Control: (Mis)Perceptions of Agency Around GenAI ... · Outcome: perceived decision authority vs. shaping of evaluation criteria
GenAI subtly influences control over everyday recruiting workflows and individual hiring decisions.
Qualitative evidence from semi-structured interviews with 22 recruiting professionals (n=22).
Confidence: high · Direction: negative · Source: Resume-ing Control: (Mis)Perceptions of Agency Around GenAI ... · Outcome: perceived control/agency in workflows and hiring decisions
The research also identifies policy loopholes and unequal AI preparedness on the continent.
Findings from the paper's systematic review highlighting gaps in policy frameworks and uneven preparedness across Sub‑Saharan African countries; no country‑level counts or indices provided in the summary.
Confidence: high · Direction: negative · Source: The Impact of AI-Driven Automation on Semi and Unskilled Wor... · Outcome: presence of policy gaps and heterogeneity in AI preparedness across countries
Results indicate rising job displacement, industrial change, and inequality.
Aggregate findings reported from the systematic review pointing to increases in job displacement, structural industrial change, and inequality across studies; no aggregated numerical magnitudes provided in the summary.
Confidence: high · Direction: negative · Source: The Impact of AI-Driven Automation on Semi and Unskilled Wor... · Outcome: incidence of job displacement; extent of industrial/structural change; levels of...
AI-driven automation is a threat to semi- and unskilled jobs, particularly in manufacturing.
Conclusion from the systematic review synthesizing studies on automation risk to semi- and unskilled positions, especially in manufacturing; no numerical risk estimate provided in the summary.
Confidence: high · Direction: negative · Source: The Impact of AI-Driven Automation on Semi and Unskilled Wor... · Outcome: risk of displacement for semi- and unskilled manufacturing jobs
Vulnerable populations—including low-skill workers, aging labour forces, and developing economies—are especially affected by AI-driven changes.
Abstract highlights special attention to vulnerable populations in the review and asserts differential impacts; no specific empirical estimates or sample sizes provided in abstract.
Confidence: high · Direction: negative · Source: AI and the Transformation of Human Employment: Challenges, O... · Outcome: distributional effects / disproportionate adverse impacts on vulnerable groups
AI displaces routine cognitive and manual tasks.
Explicit finding reported in abstract based on the paper's systematic review of empirical studies (no individual study sample sizes or quantitative estimates provided in abstract).
Confidence: high · Direction: negative · Source: AI and the Transformation of Human Employment: Challenges, O... · Outcome: displacement of routine tasks / job displacement for routine roles
The 2026 Amazon outages illustrate how 'mechanized convergence' (homogenization of code/engineering practices via AI) leads to systemic fragility.
Case study analysis using the 2026 Amazon outages as a single illustrative example; implies qualitative examination of that event.
Confidence: high · Direction: negative · Source: Cognitive Atrophy and Systemic Collapse in AI-Dependent Soft... · Outcome: systemic fragility as evidenced by outage events (2026 Amazon outages case study...
Recursive training on synthetic code threatens to homogenize the global software reservoir, diminishing the variance required for robust engineering.
Theoretical claim about dataset/model feedback loops; no empirical quantification provided in the text excerpt (argumentative risk assessment).
Confidence: high · Direction: negative · Source: Cognitive Atrophy and Systemic Collapse in AI-Dependent Soft... · Outcome: variance/diversity in global software codebase
This epistemological debt erodes the mental models essential for root-cause analysis, widening the gap between system complexity and human comprehension.
Argumentative/theoretical claim supported by reasoning in the paper; no quantified measurement of mental-model erosion reported.
Confidence: high · Direction: negative · Source: Cognitive Atrophy and Systemic Collapse in AI-Dependent Soft... · Outcome: quality/robustness of engineers' mental models and root-cause analysis capabilit...
Substituting logical derivation with passive AI verification creates an 'Epistemological Debt' — a hidden carrying cost incurred by engineers.
Theoretical/conceptual assertion within the paper; argued qualitatively rather than demonstrated with controlled empirical data.
Confidence: high · Direction: negative · Source: Cognitive Atrophy and Systemic Collapse in AI-Dependent Soft... · Outcome: accumulation of epistemic/knowledge debt among engineers
The integration of Large Language Models (LLMs) into the software development lifecycle (SDLC) masks a critical socio-technical failure the authors term 'Cognitive-Systemic Collapse.'
Conceptual/theoretical claim presented in the paper's argumentation; no empirical sample or quantitative study reported for this specific naming claim.
Confidence: high · Direction: negative · Source: Cognitive Atrophy and Systemic Collapse in AI-Dependent Soft... · Outcome: socio-technical system failure risk (Cognitive-Systemic Collapse)
There is limited but suggestive early evidence of labor market disruption from AI/LLMs.
Paper summarizes emerging empirical research indicating early signs of disruption; the abstract characterizes the evidence as limited and suggestive without presenting numeric estimates or sample sizes.
Confidence: high · Direction: negative · Source: AI Displacement Risk in the Labor Market: Evidence, Exposure... · Outcome: labor market disruption (e.g., displacement, reallocation)
Certain occupations face the greatest risk from AI-driven automation (the article examines which occupations are most at risk).
Paper claims to examine occupation-level risk using synthesized empirical studies; the abstract does not list which occupations or quantitative risk estimates.
Confidence: high · Direction: negative · Source: AI Displacement Risk in the Labor Market: Evidence, Exposure... · Outcome: occupation-level risk of automation / exposure to AI
There is a gap between theoretical automation potential and observed real-world implementation of AI/LLMs.
Synthesis of recent empirical studies that compare task-level exposure metrics with employment and usage data; no specific sample sizes or numeric estimates provided in the abstract.
Confidence: high · Direction: negative · Source: AI Displacement Risk in the Labor Market: Evidence, Exposure... · Outcome: difference between theoretical automation potential and actual adoption/implemen...
Privacy law encounters difficulties in addressing large-scale data processing and meaningful consent within employment relationships; anti-discrimination law faces evidentiary challenges in identifying algorithmic bias; doctrines of responsibility are expanding to encompass duties of oversight, verification, and explainability.
Legal analysis highlighting specific doctrinal challenges and emergent duties; no empirical tests or quantified measures included in the excerpt.
Confidence: high · Direction: negative · Source: Artificial Intelligence in Israel, Trends, Developments, and... · Outcome: effectiveness of specific legal doctrines (privacy, anti-discrimination, respons...
Traditional legal categories (privacy, consent, non-discrimination, employer responsibility) continue to apply formally but are increasingly strained in substance by the scale of data processing, opacity of AI systems, and their degree of autonomy.
Doctrinal critique and conceptual analysis provided in the paper; no empirical quantification of the degree of strain is supplied in the excerpt.
Confidence: high · Direction: negative · Source: Artificial Intelligence in Israel, Trends, Developments, and... · Outcome: fit/adequacy of existing legal doctrines to address AI-related employment issues
The decentralized and sector-specific regulatory approach reflects technological neutrality but exposes significant regulatory gaps, particularly with respect to transparency, accountability, and the protection of workers' rights.
Normative/legal analysis in the paper identifying gaps in a decentralized regulatory regime; specific case studies or empirical measures of gaps not provided in the excerpt.
Confidence: high · Direction: negative · Source: Artificial Intelligence in Israel, Trends, Developments, and... · Outcome: regulatory completeness and coverage regarding transparency, accountability, and...
Israel has not enacted a comprehensive statutory framework specifically governing the use of AI in the field of employment; regulation is implemented through a hybrid model of indirect application of existing legal doctrines (primarily privacy and labor law), soft-law instruments, collective bargaining agreements, and internal organizational and professional regulation.
Doctrinal and regulatory analysis reported in the paper describing Israel's legal/regulatory landscape; no legislative text counts or timeline analysis provided in the excerpt.
Confidence: high · Direction: negative · Source: Artificial Intelligence in Israel, Trends, Developments, and... · Outcome: existence and form of statutory and regulatory frameworks governing AI in employ...
At the structural and macroeconomic level, artificial intelligence is reshaping the balance of power within the labor market and contributes to a gradual shift toward employer-driven dynamics.
Author's macroeconomic and structural analysis as presented in the paper; no specific datasets, methods, or sample sizes are reported in the excerpt.
Confidence: high · Direction: negative · Source: Artificial Intelligence in Israel, Trends, Developments, and... · Outcome: balance of power in the labor market (employer vs. worker influence)
The supply of AI-literate workers attenuates wage inequality effects.
Presented in the article as a distributional mechanism informed by synthesized theoretical and empirical findings; no concrete empirical methods or sample sizes are provided in the excerpt.
Ethical concerns—such as transparency, explainability, psychological effects, and responsible AI governance—are critical factors influencing employability outcomes.
Review synthesis highlighting ethical issues from empirical and industry literature as influential on employability outcomes.
Confidence: high · Direction: negative · Source: The Impact of AI on Employability and Evolving Job Roles of ... · Outcome: ethical concerns' impact on employability
There are significant AI adoption challenges in education and industry that affect employability and role transformation.
Synthesized evidence from industry reports and empirical studies discussed in the review highlighting barriers to adoption in education and industry.
Algorithmic management and monitoring have reduced employees’ autonomy and perceived work meaningfulness, contributing to 'AI anxiety' characterised by concerns about job loss, skill obsolescence, and diminished control.
Qualitative studies, survey evidence, and theoretical literature reviewed that document impacts of algorithmic management on autonomy, meaningfulness, and worker anxiety (mixed-methods literature).
Confidence: high · Direction: negative · Source: From Technological Substitution to Institutional Response: A... · Outcome: employee autonomy, perceived work meaningfulness, and AI-related anxiety
Automation has intensified income inequality between high-skilled and low-skilled workers.
Synthesis of empirical literature linking automation adoption to widening wage and income gaps across skill groups (literature review).
Confidence: high · Direction: negative · Source: From Technological Substitution to Institutional Response: A... · Outcome: income/wage inequality between skill groups
Displacement effects have extended from manufacturing into cognitive roles such as clerical work and customer service.
Review of empirical studies documenting automation/substitution effects in cognitive, clerical, and customer-service roles (literature synthesis).
Confidence: high · Direction: negative · Source: From Technological Substitution to Institutional Response: A... · Outcome: occupational displacement in cognitive/clerical/customer-service roles
Automation has put downward pressure on wages.
Cited empirical studies and wage analyses in the reviewed literature indicating wage suppression associated with automation adoption (literature review).
Confidence: high · Direction: negative · Source: From Technological Substitution to Institutional Response: A... · Outcome: wage levels / wage pressure
AI and robotics have led to contractions in low-skilled occupations.
Synthesis of empirical literature reporting occupational contractions in low-skilled jobs following automation adoption (literature review).
Confidence: high · Direction: negative · Source: From Technological Substitution to Institutional Response: A... · Outcome: contraction in employment in low-skilled occupations
Extensive empirical evidence shows that AI and robotics can substitute for rule-based, codifiable routine tasks.
Review cites extensive empirical studies demonstrating substitution of rule-based, codifiable routine tasks by AI/robotics (literature synthesis).
Confidence: high · Direction: negative · Source: From Technological Substitution to Institutional Response: A... · Outcome: substitution of routine tasks (automation exposure)