Evidence (2608 claims)

Claims by topic:

- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
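Read as data, each row of the matrix gives a rough sense of how the direction of findings leans for that outcome. Note that the Total column can exceed the sum of the four direction columns shown (e.g. Firm Productivity: 385 + 46 + 85 + 17 = 533 against a listed total of 539), so the sketch below sums the direction columns directly. Counts are copied from a few rows of the table above.

```python
# Share of positive findings per outcome, summing the four direction
# columns (Positive, Negative, Mixed, Null) rather than the Total column.
rows = {
    "Firm Productivity":   (385, 46, 85, 17),
    "AI Safety & Ethics":  (183, 241, 59, 30),
    "Job Displacement":    (11, 71, 16, 1),
    "Inequality Measures": (36, 105, 40, 6),
}
for outcome, (pos, neg, mixed, null) in rows.items():
    total = pos + neg + mixed + null
    print(f"{outcome}: {pos / total:.0%} positive of {total} direction-coded claims")
```

The contrast is stark across outcomes: positive findings dominate for firm productivity, while job displacement and inequality claims lean heavily negative.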
Active filter: Skills & Training
Income inequality, measured by the Gini index, rises moderately in every scenario we examine, because job losses and gains in wage and capital income polarise the income distribution.
Calculation of the Gini index across multiple simulated scenarios using the SWITCH-linked distributional analysis.
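The Gini calculation behind claims like this can be sketched in a few lines. The displacement and wage-gain parameters below are illustrative stand-ins, not the report's SWITCH inputs.

```python
import random

def gini(incomes):
    """Gini coefficient on sorted data: G = sum_i (2i - n - 1) * x_i / (n * sum x)."""
    x = sorted(incomes)
    n, total = len(x), sum(x)
    return sum((2 * i - n - 1) * xi for i, xi in enumerate(x, start=1)) / (n * total)

rng = random.Random(42)
baseline = [rng.lognormvariate(10.5, 0.5) for _ in range(10_000)]

# Stylised shock (illustrative parameters, not the report's): 7% of workers
# are displaced and lose most of their labour income; the rest gain 3%.
shock = [x * 0.4 if rng.random() < 0.07 else x * 1.03 for x in baseline]

print(f"Gini, baseline: {gini(baseline):.3f}")
print(f"Gini, displacement scenario: {gini(shock):.3f}")
```

Adding a low-income tail while slightly raising everyone else's income widens dispersion, so the scenario Gini comes out above the baseline, matching the direction of the claim.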
The largest average losses are experienced by middle and higher income households, for whom job displacement outweighs any wage or capital income gains. Lower income households also lose, but by much less.
Distributional results from microsimulation (SWITCH) applying scenario-led job displacement, wage, and capital effects across income groups.
When these effects are combined, we find an average decline in household disposable income as a result of AI adoption.
Combined scenario simulations incorporating job displacement, wage effects and capital income effects linked to the Irish tax-benefit system using SWITCH; result reported in the report's main findings.
These wage gains are not large enough to counterbalance the average fall in income due to job displacement.
Combined simulation results (displacement + wage effects) using scenario assumptions and microsimulation (SWITCH), reported in the report's distributional analysis.
Those most likely to experience this disruption are found in higher income households, where the share of workers transitioning into unemployment is substantially larger than in lower income households.
Microsimulation (SWITCH) linking simulated job displacement scenarios to household income groups.
In our central scenario — drawn from credible international estimates — around 7 per cent of current jobs could be displaced in the short–medium run.
Scenario simulation based on international estimates of AI exposure/adoption; central scenario reported in the report (linked to SWITCH microsimulation for distributional analysis).
AI tends to place higher earning and highly educated workers at greater risk of disruption, because the occupations most exposed to AI are predominantly in these groups.
Synthesis of international research on occupational exposure to AI and the report's analysis linking exposure to worker characteristics (education and earnings); presented as descriptive finding in the report.
Result 2: When managers are short-termist or worker skill has external value, the decision-maker's optimal policy can produce the augmentation trap, leaving the worker worse off than if AI had never been adopted.
Analytical result from the dynamic model comparing planner/objective variations (short-termist manager or externalities) and showing an outcome labeled the 'augmentation trap'.
Result 1: Even a decision-maker who fully anticipates skill erosion rationally adopts AI when front-loaded productivity gains outweigh long-run skill costs, producing steady-state loss: the worker ends up less productive than before adoption.
Analytical result from the dynamic model showing optimal adoption choice can lead to a steady-state where worker productivity is lower than pre-adoption (model-based comparative statics).
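A toy version of this dynamic (my construction, not the paper's model) makes the mechanism concrete: AI multiplies output immediately while the skill producing it decays toward a lower floor, so the adopter enjoys front-loaded gains and the worker lands at a lower steady-state productivity. All parameters below are assumptions for illustration.

```python
def simulate(adopt_ai, periods=40, s0=1.0, boost=1.5, erosion=0.08, floor=0.5):
    """Return per-period output and final skill under assumed toy dynamics:
    with AI, output is boost * skill, but skill decays geometrically toward
    a floor; without AI, skill and output stay at their initial level."""
    s, outputs = s0, []
    for _ in range(periods):
        if adopt_ai:
            outputs.append(boost * s)
            s = floor + (1 - erosion) * (s - floor)  # skill erodes toward floor
        else:
            outputs.append(s)
    return outputs, s

out_ai, skill_ai = simulate(adopt_ai=True)
out_no, skill_no = simulate(adopt_ai=False)

print(f"Early AI output gain: {out_ai[0] / out_no[0]:.2f}x")
print(f"Steady-state skill with AI: {skill_ai:.2f} (without: {skill_no:.2f})")
print(f"Late-period output with AI: {out_ai[-1]:.2f} (without: {out_no[-1]:.2f})")
```

With these numbers the first period delivers a 1.5x output gain, yet by the end of the horizon skill has eroded far enough that even boosted output falls below the no-adoption baseline, the steady-state loss of Result 1; Result 2's trap arises when the decision-maker discounts or ignores that long-run cost.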
Experimental evidence shows that sustained use of AI tools can erode the expertise on which productivity gains depend (deskilling).
Statement in paper referencing experimental studies (no specific study, method, or sample size reported in the excerpt).
These dynamics risk trapping workers in a 'low-skill trap'.
Synthesis of observed labour-market polarisation, persistent low-skill segment, and limited reskilling coverage from secondary sources (2020–2024); presented as a likely risk/consequence.
Limited reskilling coverage constrains workers' ability to adapt to AI-driven changes.
Paper reviews official reports and secondary data (2020–2024) indicating low coverage/uptake of reskilling programs in India and links this to limited adaptation capacity.
AI-driven change is intensifying wage disparities.
Paper links observed occupational shifts in secondary data (2020–2024) with widening wage gaps between high- and lower-skilled groups.
Routine middle-skilled roles are declining.
Secondary data and official reports from 2020–2024 documenting reductions in middle-skill occupations, interpreted through SBTC/Human Capital frameworks.
There is a 'capability-demand inversion': the skills most demanded in AI-exposed jobs are those on which LLMs perform worst in our benchmark.
Cross-referencing SAFI performance with Anthropic Economic Index demand data (reported in paper); described as an observed inversion pattern.
We posit that persistence is reduced because AI conditions people to expect immediate answers, denying them the experience of working through challenges on their own.
Authors' proposed psychological mechanism / explanation inferred from observed behavior; presented as a hypothesis rather than directly proven causal mediator.
These negative effects (reduced persistence and impaired unassisted performance) emerge after only brief interactions with AI (approximately 10 minutes).
Experimental manipulation / exposure in RCTs where participants interacted with AI for about 10 minutes and subsequent outcomes were measured.
People are more likely to give up after interacting with AI (increased likelihood of quitting tasks unassisted).
Randomized controlled trials (N = 1,222) measuring rates of task abandonment/giving-up after AI interaction vs. control.
AI assistance impairs unassisted performance: although AI improves short-term performance, people perform significantly worse without AI after interacting with it.
Randomized controlled trials (N = 1,222) comparing performance with and without AI assistance across tasks; causal inference from randomized assignment.
Through a series of randomized controlled trials on human-AI interactions (N = 1,222), we provide causal evidence that AI assistance reduces persistence.
Randomized controlled trials (RCTs) on human-AI interactions with total sample size N = 1,222; persistence measured after AI interaction across tasks.
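The design described here is a standard two-arm comparison, and a minimal analysis for a binary outcome such as quitting is a two-proportion z-test. The per-arm counts below are hypothetical, since the excerpt reports only the total N = 1,222, not quit rates by arm.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled variance."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # P(|Z| > z) for standard normal
    return z, p_value

# Hypothetical illustration: 180/611 quit after AI exposure vs 120/611 in control.
z, p = two_proportion_ztest(x1=180, n1=611, x2=120, n2=611)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Under randomized assignment, a significant difference in quit rates between arms supports a causal reading, which is the logic behind the paper's persistence claim.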
Occupations are not eradicated instantaneously, but gradually encroached upon via atomic actions.
Conceptual argument presented by the authors as part of their theoretical framing (Tech-Risk Dual-Factor Model); no empirical count reported for this specific claim.
Existing task-based evaluations predominantly measure theoretical "exposure" to AI capabilities, ignoring critical frictions of real-world commercial adoption: liability, compliance, and physical safety.
Authoritative statement in paper contrasting prior task-based exposure evaluations with the paper's focus on business/institutional frictions (liability, compliance, physical safety). No numeric sample; literature critique based on conceptual analysis.
Up to 25% of routine administrative tasks face high automation risk.
Quantitative survey of 150 leading Nigerian firms across finance, tech, and manufacturing reporting the share of tasks at high automation risk.
There is a significant deficit in high-demand technical competencies such as data engineering, machine learning maintenance, and AI ethics within the Nigerian workforce.
Findings reported from the quantitative survey of 150 leading Nigerian firms (finance, tech, manufacturing) supplemented by qualitative workforce interviews and policy analysis.
Practitioners identified specific functional deficiencies in AI: inability to maintain sustained partnerships.
Theme from semi-structured interviews with 10 practitioners; cited as an example of the functional gap.
Practitioners identified specific functional deficiencies in AI: inability to adapt contextually.
Theme from semi-structured interviews with 10 practitioners; cited as an example of the functional gap.
Practitioners identified specific functional deficiencies in AI: inability to negotiate responsibilities.
Theme from semi-structured interviews with 10 practitioners; cited as an example of the functional gap.
Practitioners currently view AI models as intellectual teammates rather than social partners and expect fewer SEI attributes from them than from human teammates.
Qualitative findings from semi-structured interviews with 10 software practitioners reported in the study.
Current AI systems lack SEI capabilities that humans bring to teamwork, creating a potential gap in collaborative dynamics.
Framed as background/context in the paper; asserted rather than empirically tested in this study.
Unbalanced or poorly governed adoption of Big Data and AI contributes to increased systemic risk, cybersecurity vulnerability, regulatory fragmentation and third-party dependence on BigTech platforms.
Argument based on qualitative literature review and synthesis of international empirical studies and comparative sector analysis; no single-sample empirical study in this paper.
Extreme automation (high AI intensity) causes employment decline.
Part of the U-shaped relationship reported by the paper's empirical results; described qualitatively in the abstract/summary.
Task orchestration is the most under-researched dimension among the five workplace-design components.
Finding from the PRISMA-guided systematic review of 120 papers, which mapped coverage across the five dimensions and identified task orchestration as having the least research attention.
Decision authority allocation emerges as the binding constraint for Society 5.0 transitions.
Result synthesized from the systematic review and theoretical analysis mapping the five workplace-design dimensions; stated as the binding constraint in the paper's findings.
Under low emotional intelligence, the model predicts higher risks of over-reliance on AI, emotionally detached communication, and weaker delegation quality.
Theoretical predictions derived from the EI-moderated human–AI model presented in the paper.
Indonesia's current labor-law framework is reactive, focusing on post-layoff compensation, and is not yet able to address the long-term impacts of AI disruption.
Normative analysis of legislation and findings from the reviewed literature; conclusion reported by the study's authors.
There is as yet no explicit regulation of retraining obligations or of mechanisms for distributing the benefits of technology fairly within Indonesia's current labor-law framework.
Finding from analysis of national legislation (UU Cipta Kerja and its implementing regulations) and the literature examined in the normative study.
AI adoption raises legal challenges concerning the protection of workers' rights, social justice, and the sustainability of the employment system.
Normative analysis of AI's socio-economic consequences, synthesized from national (SINTA-indexed) and international literature; conceptual and comparative approaches are described in the methods.
The rapid development of Artificial Intelligence (AI) has brought fundamental change to the structure of Indonesia's labor market, with a rising risk of human jobs being replaced by automation technology.
Background statement supported by a literature review of SINTA-indexed national journals and reputable international journals (method: normative legal research with statutory, conceptual, and comparative approaches).
The common claim that generative AI simply amplifies the Dunning–Kruger effect is too coarse to capture the available evidence.
Paper's synthesis of heterogenous empirical findings from human–AI interaction, learning research, and model evaluation used to critique the uniform-amplification interpretation; no single empirical countertest reported.
LLM use degrades metacognitive accuracy and flattens the classic competence–confidence gradient across skill groups (i.e., reduces calibration and narrows differences in self-assessed confidence by skill level).
Synthesis of studies from human–AI interaction and learning research reported in the paper that document worsened calibration and a reduction in the competence–confidence gradient when users rely on LLM outputs; the paper does not report a single combined sample size.
Prominent studies predict substantial job displacement due to automation.
Paper asserts this as background, referencing the existence of prominent studies in the literature (no specific citations or sample sizes provided in the abstract).
The financial planning and investment management profession is undergoing a radical transformation driven by Generative AI (GenAI) and Agentic AI, creating urgent workforce displacement challenges that require coordinated government policy intervention alongside educational reform.
Author assertion in the paper's introduction/abstract; framing argument based on the paper's synthesized analysis (no empirical sample, no reported statistical test).
AI-driven job displacement disproportionately affects low-skilled workers.
Reported empirical result from the paper's PLS-SEM analysis on the 351-respondent dataset.
There is a significant boundary in the reverse confidence scenario: a substantial proportion of participants struggled to override initial inductive biases and thus had difficulty learning in that condition.
Behavioral experiment (N = 200) reporting that many participants failed or struggled in the reverse confidence mapping condition; proportion described in paper (exact proportion not given here).
Currently, the region remains reactive as a 'recipient' rather than a 'creator' or an effective partner in the AI ecosystem.
Characterization reported by the authors based on their regional research and field study (qualitative findings from leaders across public/private sectors).
This gap hinders the ability of many governments in the region to push their countries toward joining the ranks of those benefiting from the AI revolution—both in developing the public sector and supporting economic growth and social development.
Authors' analysis and interpretation based on the regional research/field study described in the report.
The Arab region’s capacity for Artificial Intelligence (AI) governance remains limited relative to the accelerating pace of global AI developments and associated challenges.
Stated conclusion in the executive report based on a regional field study (authors' analysis of interviews/surveys and research across the region).
As artificial intelligence assumes cognitive labor, no existing quantitative framework predicts when human capability loss becomes catastrophic.
Introductory/background claim asserted by authors motivating the study (literature gap claim).
Broader AI scope lowers the critical threshold K* (i.e., more general AI reduces the K* value at which capability collapse occurs).
Model sensitivity analysis / simulations showing K* varies with assumed scope of AI (reported in model calibration discussion).
The model identifies a critical threshold of K* ≈ 0.85 (scope-dependent; broader AI scope lowers K*) beyond which capability collapses abruptly: the 'enrichment paradox.'
Model analysis and simulations calibrated across domains (paper reports computed threshold K* ≈ 0.85 and notes dependence on AI scope).
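One way such a threshold can arise is sketched below. This is a minimal dynamical model of my own construction, not the paper's: capability C regenerates logistically through unassisted practice, reliance K on AI crowds that practice out, and skills decay at a constant rate. The parameters r and d are chosen so the threshold lands at the reported K* ≈ 0.85.

```python
# Assumed dynamics:  dC/dt = (1 - K) * r * C * (1 - C) - d * C
# The nonzero equilibrium C* = 1 - d / ((1 - K) * r) exists only while
# (1 - K) * r > d, i.e. for K below K* = 1 - d/r. With r = 0.5 and
# d = 0.075 (illustrative values), K* = 0.85: capability settles at a
# positive level below that reliance, and collapses to zero above it.

def steady_capability(K, r=0.5, d=0.075, steps=50_000, dt=0.01):
    """Integrate the assumed ODE forward and return the long-run capability."""
    C = 0.99
    for _ in range(steps):
        C += dt * ((1 - K) * r * C * (1 - C) - d * C)
        C = max(C, 0.0)
    return C

for K in (0.5, 0.8, 0.86, 0.95):
    print(f"K = {K:.2f} -> steady-state capability {steady_capability(K):.3f}")
```

The sketch reproduces the two qualitative claims: steady-state capability falls as reliance rises, and past the critical reliance the only equilibrium is collapse; raising d or lowering r (a rough analogue of broader AI scope crowding out more practice) pushes K* = 1 - d/r down.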