Evidence: 2469 claims (filtered to Org Design)

Claims by topic category:

- Adoption: 5539
- Productivity: 4793
- Governance: 4333
- Human-AI Collaboration: 3326
- Labor Markets: 2657
- Innovation: 2510
- Org Design: 2469
- Skills & Training: 2017
- Inequality: 1378
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
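The matrix lends itself to simple derived rates. A minimal sketch in Python, using three rows transcribed from the table above; note that a few printed Totals exceed the sum of the four direction columns (e.g. Firm Productivity: 306 + 39 + 70 + 12 = 427 vs. a printed Total of 432), so the shares below are computed over the four listed directions only:

```python
# Share of positive findings per outcome, computed from the matrix above.
# Counts are transcribed from three rows of the table; em-dash cells
# elsewhere in the table would be entered as 0.
rows = {
    "Firm Productivity":   (306, 39, 70, 12),  # Positive, Negative, Mixed, Null
    "Inequality Measures": (25, 77, 32, 5),
    "Job Displacement":    (6, 38, 13, 0),
}

def positive_share(pos, neg, mixed, null):
    """Fraction of a row's classified claims that report a positive finding."""
    return round(pos / (pos + neg + mixed + null), 2)

shares = {name: positive_share(*counts) for name, counts in rows.items()}
```

The inversion between the extremes (productivity outcomes skew positive while displacement and inequality outcomes skew negative) is visible directly in these shares.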
Org Design
Tasks that workers associate with a sense of agency or happiness may be disproportionately exposed to AI.
Empirical finding based on the paper's worker and developer surveys on 171 tasks, with LM scaling to 10,131 tasks; phrased cautiously in the paper as 'may be' disproportionately exposed.
There is a growing tension between relatively rigid education and training systems and the rapidly changing skill requirements of digitally driven labor markets.
Argument motivated and supported by comparative assessment of international practices and systemic analysis; descriptive/comparative evidence rather than quantified empirical testing.
Information saturation from AI output contributes to cognitive overload among employees.
Grounded in the paper's application of cognitive load theory to findings from surveys and organizational research; the excerpt gives no direct measures of information volume or of its cognitive effects.
Extensive AI use correlates with measurable productivity losses.
Paper states this correlation is observed in organizational research and large-scale surveys; the excerpt lacks details on productivity measures, sample sizes, or statistical controls.
Extensive AI use correlates with increased decision fatigue.
Reported correlation based on the same cited large-scale surveys and organizational research; no methodological details or effect sizes provided in the excerpt.
Extensive AI use correlates with increased turnover intention among employees.
Paper reports correlations observed in recent large-scale surveys and organizational research; the excerpt does not provide correlation coefficients, sample sizes, or control variables.
AI-augmented work environments create cognitive overload through information saturation, relentless task-switching, and the demanding oversight of multiple AI agents.
Synthesis in the paper drawing on research on human-AI collaboration and cognitive load theory and citing organizational research; specific empirical methods or sample sizes not provided in the excerpt.
Employees using AI extensively report significant mental fatigue, dubbed 'AI brain fry.'
Stated in the paper as derived from recent large-scale surveys and organizational research; no specific sample size, survey instrument, or statistical details provided in the text excerpt.
The SCF is extended into a second-order layer (SCF-E) that incorporates a deficit of technocultural imagination and symbolic governance, explaining why AI remains stuck in pilots rather than converting into organizational capability.
Conceptual (second-order) extension reported in the article; methodologically supported by the QUAN→QUAL combination, including SCF-oriented ethnography (empirical details appear in the body of the article, not the abstract).
The technology-adoption literature (TAM, UTAUT, Diffusion of Innovations) tends to treat resistance as a generic behavioral variable or a 'training' deficiency, neglecting symbolic dimensions (rites, identities, and power), cognitive threat mechanisms (loss aversion, overload, and heuristics), and their economic effects.
Literature review and theoretical positioning stated in the article, comparing established models with the proposed perspective; no indication of meta-analysis or empirical counts in the abstract.
Psycho-anthropological Friction (SCF) is proposed and detailed as a measurable coefficient of the cultural cost and cognitive resistance that reduces the capacity of small and medium-sized enterprises (SMEs) to turn Artificial Intelligence (AI) initiatives into value generation at scale.
Theoretical proposition and operationalization presented in the article; methodological design described as QUAN→QUAL, including psychometric scale construction and organizational ethnography. The abstract does not specify a validation sample size.
Over-reliance on data-driven insights without adequate human oversight can worsen market uncertainty.
Reported in the study's qualitative case studies and interpretive analysis as a potential negative consequence of improper AI/Big Data use (no quantified examples provided in the summary).
Algorithmic bias is a potential pitfall of using AI and Big Data that can exacerbate market uncertainty.
Identified as a risk in the paper's qualitative analysis and discussion of pitfalls (no incident counts or empirical quantification provided in the summary).
External pressures (e.g., pandemics, extreme weather, geopolitical conflicts) disproportionately affect peripheral suppliers in the construction supply chain network.
Mapping of challenge categories to network positions in the study showed external pressures concentrating at peripheral supplier nodes; based on interview reports and network coding (quantitative support not detailed in abstract).
Relationship and contract issues accumulate at high-centrality brokers, which exhibit a reported degree centrality of 0.818.
Result reported in the paper linking the thematic category (relationship/contract issues) to network nodes identified as high-centrality brokers; a numeric degree centrality value (0.818) is reported for these brokers. Underlying network constructed from thematic coding of interviews; sample size not provided in abstract.
Six main challenge categories (comprising 16 open codes) concentrate systematically at specific network positions.
Results reported: thematic grouping produced six challenge categories and 16 open codes, and these were mapped to positions in the network showing systematic concentration; underlying data derive from coded interviews and network mapping (sample size not given in abstract).
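The 0.818 value above is consistent with normalized degree centrality (a node's degree divided by n − 1). The network size is not given in the abstract, so the sketch below uses a purely illustrative 12-node network, where 9/11 ≈ 0.818:

```python
# Normalized degree centrality: the fraction of the other n - 1 nodes
# a node is directly connected to. Values lie in [0, 1].

def degree_centrality(degree, n_nodes):
    return degree / (n_nodes - 1)

# Hypothetical broker tied to 9 of the 11 other nodes in a 12-node network:
broker = degree_centrality(degree=9, n_nodes=12)  # ≈ 0.818
```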
Short-run labor market disruptions raise concerns regarding wage inequality and workforce adaptation.
Claims based on observed short-run labor market adjustments in publicly available data and theoretical implications for inequality and adaptation; specific empirical measures, time horizons, and sample sizes are not reported in the excerpt.
AI simultaneously increases adjustment pressures for routine tasks.
Argument and cited observations from publicly available labor market data indicating displacement or adjustment in routine-task-intensive occupations (no specific empirical estimates or samples provided).
The Cautious are held in organizational stasis: without early adopter examples they don't enter the virtuous adoption cycle, never accumulate the usage frequency that drives intent, and never attain high efficacy.
Comparative analysis of archetype subgroups in the survey (N=147) showing the 'Cautious' group has lower reported usage frequency, lower intent to increase usage, and lower self-reported efficacy relative to 'Enthusiasts' and 'Pragmatists'.
Adoption of AI testing tools lags that of coding tools, creating a 'Testing Gap'.
Within-sample comparison of reported adoption rates for coding-oriented AI tools versus testing-oriented AI tools among 147 developers, showing lower adoption for testing tools.
Security concerns remain a moderate and statistically significant barrier to adoption.
Survey-derived security-concern metric (N=147) that shows a statistically significant negative association with future adoption intention (reported as moderate in effect size).
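For context, a sketch of how significance is typically checked for a survey correlation at this sample size; the correlation value below is hypothetical, since the excerpt reports neither r nor p:

```python
import math

def t_statistic(r, n):
    """t value for testing H0: rho = 0 given a Pearson correlation r at sample size n."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Hypothetical moderate negative association between the security-concern
# metric and adoption intention at the study's N = 147:
t = t_statistic(r=-0.30, n=147)
significant = abs(t) > 1.98  # approximate two-tailed critical t, df = 145, alpha = .05
```

At N = 147 even modest correlations clear the threshold, which is why a "moderate and statistically significant" barrier is plausible without a large effect size.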
Traditional human resource management (HRM) approaches in hospitals rely on manual processes that are prone to errors, lack adaptability, and fail to adequately balance staff preferences with patient care requirements.
Background/positioning statement in the paper; asserted based on literature and authors' motivation for proposing an AI-driven framework (no specific dataset or quantitative analysis provided for this claim).
Simulations project measurable reductions in defect rates under AI-HRM scenarios.
Regression-based simulations of the counterfactual model include defect reduction as an organizational outcome and project decreases in defect rates when HR processes are AI-supported.
Simulations show notable reductions in absenteeism under the AI-HRM scenario.
Predictive estimation and regression-based simulations projecting absenteeism rates under counterfactual AI-supported HR processes using the industrial firm dataset.
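A regression-based counterfactual of the kind described can be sketched as follows. The data points, variable names, and resulting coefficients are hypothetical illustrations, not the paper's industrial-firm dataset:

```python
# Fit a simple OLS line of absenteeism on AI-support intensity, then project
# the counterfactual outcome at full intensity. All numbers are synthetic.
data = [(0.0, 8.1), (0.1, 7.9), (0.2, 7.4), (0.4, 6.8), (0.5, 6.5), (0.7, 5.9)]
# (ai_support_intensity, absenteeism_rate) pairs

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum(
    (x - mean_x) ** 2 for x, _ in data
)
intercept = mean_y - slope * mean_x

baseline = intercept + slope * 0.0   # no AI support
projected = intercept + slope * 1.0  # counterfactual: fully AI-supported HR
```

The same mechanics extend to the defect-rate outcome: estimate coefficients on observed data, then evaluate the fitted model at a counterfactual level of AI support.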
AI integration into resort-to-force decision-making organizations raises important concerns.
Conceptual claim discussed by the author; the paper does not present empirical data, incident analyses, or quantified risk assessments supporting this claim within the provided excerpt.
Governing the complexity introduced by military AI integration is urgent but currently lacks clear precedents.
Author's claim grounded in argumentation and review-style reasoning; no systematic review or empirical mapping of precedents is provided in the text.
We can expect increased organizational complexity in military decision-making institutions as AI proliferates.
Theoretical inference presented by the author; no empirical methods or measurements (e.g., complexity metrics, case studies, or sample sizes) are reported.
When positives are rare, the prevalence effect induces systematic cognitive biases that inflate misses and can propagate through the AI lifecycle via biased training labels.
Analysis of prior experimental evidence cited and discussed in the paper (literature review / synthesis). Specific prior studies and their methods are analyzed in the paper (sample sizes and individual study details not provided in the supplied excerpt).
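The propagation mechanism can be made concrete with hypothetical rates (none of these numbers come from the paper or the studies it synthesizes):

```python
# Biased human labels at low prevalence: misses on rare positives depress the
# apparent prevalence in the training labels the AI lifecycle inherits.
prevalence = 0.02         # true positive rate in the data
miss_rate = 0.40          # share of true positives the rater misses (prevalence effect)
false_alarm_rate = 0.001  # share of true negatives mislabeled positive

labeled_prevalence = (
    prevalence * (1 - miss_rate) + (1 - prevalence) * false_alarm_rate
)
# ≈ 0.013, well below the true 0.02: a model trained on these labels
# learns from a dataset missing over a third of the true positives.
```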
Many core university functions can now be achieved through AI-powered alternatives, potentially rendering conventional models obsolete for many learners.
Analytical assessment by the authors, without reported empirical testing or quantified methodology; based on review of AI capabilities and extrapolation.
Universities' core value proposition is challenged and potentially displaced by AI technologies as they alter how knowledge is accessed, created, and validated.
Authors' analytical argument drawing on technological, economic, and social drivers; presented as synthesis rather than empirical proof (no sample size or empirical method reported).
Traditional IT service hiring will be displaced by expansion of product-focused roles and Global Capability Centres (GCCs).
Synthesis of industry reports and workforce data indicating shifts in hiring patterns; the abstract does not report sample sizes or exact metrics.
Psychological barriers — specifically algorithm aversion, AI-induced job insecurity, technostress, and diminished occupational identity — impede effective AI integration across U.S. industries.
Literature synthesis of empirical and theoretical work in AI–HRM and organizational psychology cited in the paper (summary does not report primary-study sample sizes).
Workforce psychological readiness, rather than technological capability alone, constitutes the critical bottleneck in organizational AI adoption.
Synthesis of emerging empirical AI–HRM research and theoretical integration (paper reports 'findings' from this synthesis; no primary-sample-size details provided in the summary).
The integration of AI into U.S. workplaces represents a profound organizational psychology challenge that extends well beyond mere technology adoption.
Conceptual/theoretical argument based on literature synthesis; draws on established theories (Technology Acceptance Model, Human–AI Symbiosis Theory, Job Demands–Resources Model, Organizational Trust Theory) and cited empirical AI–HRM studies (no specific sample sizes or primary data reported in the summary).
What remains needed is rigorous advice to policymakers concerned about rapid increases in labor churn, scientific development, labor–capital shifts, or existential risk.
Normative conclusion drawn by the author from gaps identified in the seven-book review (qualitative assessment of unmet policy-relevant analysis); sample = 7 books.
The reviewed works offer little guidance regarding the transformative scenarios considered plausible by many AI researchers.
Author's evaluative judgment based on the content and emphases of the seven books (qualitative gap analysis); sample = 7 books.
AI heightens job insecurity, particularly in organisations lacking structured reskilling programs.
Stated finding derived from the mixed-method study and Scopus database analysis; framed with a conditional modifier pointing to organisations without structured reskilling programs. (Summary does not provide sample size, effect sizes, or statistical significance.)
The stability and patience that define long-term investors can breed strategic inertia.
Introductory assertion in the paper (conceptual observation). The paper does not present empirical data or sample analysis to substantiate this causal claim in the provided excerpt.
Conventional thinking often frames AI uncritically as just a tool for efficiency, which is a narrow perspective that overlooks AI's transformative role.
Critical/theoretical argument presented in the paper (conceptual observation). No empirical data, sample, or statistical analysis reported to support this claim.
In abundant-resource conditions, emergent tribe formation slightly increases system overload (i.e., makes the near-zero overload slightly worse).
Empirical observations reported in the paper indicating a modest increase in overload when tribes form under abundant resources.
When resources are scarce, AI model diversity and reinforcement learning increase dangerous system overload.
Empirical results from the paper's AI-agent population experiments (simulations/real-agent trials) combined with mathematical analysis indicating increased overload under scarcity when model diversity and individual RL are present.
Combined analysis using Fuzzy PROMETHEE II and DEMATEL identifies High Initial Investment and Supply Chain Integration as critical barriers and dominant causal drivers that influence other dependent barriers.
Findings come from the integrated PROMETHEE II ranking and DEMATEL causal-mapping analyses based on expert input and literature review; detailed sample size and numerical results not provided in the summary.
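The DEMATEL step that separates causal drivers from dependent barriers works on a total-relation matrix T = X(I − X)⁻¹, where X is the normalized expert direct-influence matrix. A minimal sketch with a hypothetical 3 × 3 matrix (the paper's expert scores, barrier set, and numerical results are not in the summary):

```python
import numpy as np

# Hypothetical 0-4 expert direct-influence scores among three barriers:
# 0 = High Initial Investment, 1 = Supply Chain Integration, 2 = a dependent barrier
A = np.array([[0, 3, 4],
              [2, 0, 3],
              [1, 1, 0]], dtype=float)

X = A / A.sum(axis=1).max()           # normalize by the largest row sum
T = X @ np.linalg.inv(np.eye(3) - X)  # total-relation matrix T = X (I - X)^-1

R = T.sum(axis=1)                     # total influence each barrier exerts
C = T.sum(axis=0)                     # total influence each barrier receives
causes = np.where(R - C > 0)[0]       # net drivers: R - C > 0
```

With these illustrative scores, barriers 0 and 1 come out as net causes and barrier 2 as dependent, mirroring the qualitative finding; R + C gives each barrier's overall prominence.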
There are challenges to adopting AI in HRM within IT firms.
Identified through the literature review and the empirical study involving HR professionals; the summary notes challenges but does not enumerate or quantify them.
We lack frameworks for articulating how cultural outputs might be actively beneficial.
Authors' identification of a gap in evaluation theory and practice (conceptual analysis); no systematic literature review details provided in the excerpt.
Current AI evaluation practices show a critical asymmetry: while AI assessments rigorously measure both benefits and harms of intelligence, they focus almost exclusively on cultural harms.
Authors' review/critique of existing evaluation frameworks and metrics (qualitative analysis in the paper); the excerpt does not list the reviewed studies or their number.
The field of AI is unprepared to measure or respond to how the proliferation of entertaining AI-generated content will impact society.
Authors' assessment of current evaluation practices and frameworks (qualitative analysis presented in the paper); no empirical metrics or sample sizes provided in the excerpt.
Current literature has primarily focused on automation-based views of decision support and lacks insight into systematic human–AI coordination aided by analytics.
Literature review and conceptual critique within the paper. No systematic mapping study or bibliometric counts reported.
Most organizations have difficulties converting algorithmic results into sustainable managerial decisions due to low levels of trust, lack of explanation, and poor integration between AI systems and human judgment.
Synthesis of existing literature presented in the conceptual paper (literature review). No empirical study or sample provided to quantify 'most organizations.'
AI adoption has augmented complexity, uncertainty in decision-making, and accountability stresses for managers.
Claim supported by conceptual argument and literature integration (qualitative synthesis). No empirical sample size or quantitative testing reported.
Traditional methods for assessing and developing employees' skills often fail to provide real-time feedback.
Statement supported by literature review cited by the authors; the abstract does not provide empirical comparisons, metrics, or sample sizes.