Evidence (3224 claims)

Claim counts by theme:

- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
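For quick comparisons across outcomes, the matrix rows can be treated as plain count tuples; a minimal Python sketch (counts copied verbatim from three rows above; the `positive_share` metric is an illustrative summary, not a statistic the source reports):

```python
# Direction counts per outcome, copied from the evidence matrix above:
# (positive, negative, mixed, null)
rows = {
    "Firm Productivity": (385, 46, 85, 17),
    "Inequality Measures": (36, 105, 40, 6),
    "Job Displacement": (11, 71, 16, 1),
}

def positive_share(counts):
    """Share of directional findings (positive vs. negative) that are
    positive, ignoring mixed and null claims."""
    pos, neg, _mixed, _null = counts
    return pos / (pos + neg)

for outcome, counts in rows.items():
    print(f"{outcome}: {positive_share(counts):.2f}")
```

On these rows the share runs from 0.89 for Firm Productivity down to 0.13 for Job Displacement, mirroring the sign pattern visible in the table.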
Labor Markets
- Centralized governance architectures can favor integrated platform vendors (bundled low-code + RPA + AI + policy engines) or create opportunities for governance-layer specialists, affecting competition and lock-in.
  Evidence: Market-structure implication argued through economic and industry reasoning; supported by observations of vendor dynamics in practitioner examples but not by systematic market analysis.
- Enabling safer deployment of higher-risk automations may increase displacement of routine cognitive tasks while creating demand for governance, compliance, and AI oversight roles.
  Evidence: Projected labor-market effect based on task composition reasoning and practitioner expectations; suggested as a likely outcome but not empirically measured in the paper.
- Insurers may revise underwriting, raise premiums, or exclude certain AI-related exposures until risk assessments improve; new insurance products may emerge for AI governance failures.
  Evidence: Policy and market impact speculation based on perceived risk; no empirical insurer responses or underwriting data provided.
- Firms will reallocate resources toward AI governance, monitoring tools, and skilled auditors (increasing compliance and labor costs), and demand for products/services (prompt-provenance tools, watermarking, AI forensic services, certified-safe LLMs) will rise.
  Evidence: Market/economic projection based on the identified threat and presumed demand for mitigations; speculative without market-data support in the paper.
- Demand for labor may shift from routine instrument operation and image processing toward higher-level tasks (experiment design, oversight, interpretation), and LLMs may amplify productivity of skilled scientists, potentially increasing wage premia for those who supervise AI-guided workflows.
  Evidence: Labor-economics reasoning and analogy to prior automation effects; no empirical labor-market or wage data presented specific to microscopy.
- Implication for AI/platform economics: complementarities between public funding and digital (AI-enabled) platforms can convert public demand into decentralized labor opportunities, reshaping sectoral employment without growth in traditional firms.
  Evidence: Conceptual extension of empirical findings on platform-mediated cultural employment and fiscal procurement interactions; evidence comes from city-level DID results and inferred platform-activity proxies (280 cities, 2008–2021).
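The city-level difference-in-differences evidence cited here can be sketched as a standard two-way fixed-effects specification (an illustrative form with a hypothetical binary treatment; the paper's exact estimating equation may differ):

$$
Y_{ct} = \beta\,(\text{Treat}_c \times \text{Post}_t) + \gamma_c + \delta_t + \varepsilon_{ct}
$$

where \(Y_{ct}\) is a platform-activity or cultural-employment proxy for city \(c\) in year \(t\) over 2008–2021, \(\gamma_c\) and \(\delta_t\) absorb city and year fixed effects, and \(\beta\) is the DID estimate of the fiscal-procurement/platform interaction.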
- Smart power strategies that promote domestic AI champions (via procurement, subsidies, industrial policy) affect labour markets, inequality, and international labour arbitrage.
  Evidence: Conceptual claim grounded in literature on industrial policy and labour economics with policy examples referenced; no primary microdata analysis in the paper.
- Widespread adoption of formal governance could lower systemic risk from enterprise AI failures, whereas heterogeneous adoption may create winners and losers based on governance quality.
  Evidence: Conceptual systems-level argument and comparative-case reasoning; no quantitative systemic-risk modeling or empirical evidence provided.
- Greater automation of routine ERP/CRM tasks will displace some operational roles while increasing demand for governance, oversight, and AI-engineering skills, shifting labor toward higher-skill, higher-wage tasks.
  Evidence: Theoretical labor-market implication derived from the pattern's effects on task automation and governance needs; based on qualitative synthesis, not empirical labor-market analysis.
- Risk-adjusted total cost of ownership (TCO) may fall if governance prevents costly incidents (e.g., compliance fines, data breaches), despite higher upfront costs.
  Evidence: Conceptual economic argument supported by qualitative examples and best-practice reasoning; no empirical ROI or incident-rate data presented.
- Expensive formalization may push firms either to remain informal (preserving low-cost labor) or to automate instead of hiring formally; policy choices that lower formalization costs could retain jobs that otherwise would be automated.
  Evidence: Analytical inference from the measured CFIL and NWC values across the 19 countries and standard economic reasoning about cost-driven firm choices; the note does not present micro-level causal tests of these pathways.
- Macroeconomic policy should monitor aggregate demand effects from reallocation and inequality; active fiscal and monetary coordination may be required to manage aggregate impacts of AI-driven reallocation.
  Evidence: Synthesis and policy implication drawing on macroeconomic reasoning and literature linking redistribution and demand to overall employment and growth; not presented as a single causal empirical result.
- AI diffusion may widen inequality across education and regions and potentially reduce labor supply among financially constrained households.
  Evidence: Derived implication from heterogeneous negative associations between AI-rich regions and employment intention for low-educated and financially constrained respondents in the cross-sectional sample (n=889).
- Risk of platform shutdown (platform mortality) shapes user behavior by reducing incentives to invest time and effort configuring agents, creating stranded-asset-like risks.
  Evidence: Qualitative observations and economic reasoning linking user reports/behaviors to perceived platform risk during the one-month observational period; no formal economic measurement or causal identification.
- If verified and explainable GLAI is priced higher due to compliance costs, access-to-justice gaps may widen as lower-cost but riskier offerings persist or services become more expensive.
  Evidence: Distributional reasoning linking higher compliance costs to price increases and access effects; supported by illustrative examples, but no empirical price or access data.
- Routine, unrestrained adoption of GLAI without enforceable mechanisms for effective human review threatens judicial independence and rights protections.
  Evidence: Normative and legal argumentation supported by conceptual analysis and illustrative scenarios. No empirical causal evidence; projection based on theoretical risk pathways.
- There is a risk of deskilling, especially for trainees receiving reduced diagnostic practice when AI automates routine tasks.
  Evidence: Conceptual arguments supported by qualitative reports and limited observational findings; empirical longitudinal evidence quantifying deskilling is sparse.
- Such disjointed strategies cannot manage the systemic socio-economic disruption ahead.
  Evidence: Asserted in the abstract as a conclusion/argument; no empirical evaluation described in the abstract.
- AI threatens to fracture the 20th-century social contract.
  Evidence: Asserted in the abstract as a normative/predictive claim; no empirical support described in the abstract.
- Unequal GenAI adoption has implications for productivity, skill formation, and economic inequality in an AI-enabled economy.
  Evidence: Interpretation/implication drawn from observed gendered adoption patterns in the 2023–2024 UK survey and literature on technology diffusion and labor-market impacts (no direct empirical measurement of downstream economic effects in the paper).
- AI-driven productivity gains may not translate into broad-based demand if income is concentrated among capital owners, which could dampen aggregate profitability over time.
  Evidence: Theoretical argument grounded in Mandel-like distributional mechanics and demand-driven growth literature; speculative without empirical aggregation tests in the paper.
- Concentration of curated datasets and restrictive IP can create monopolistic rents and underprovision of public‑good datasets, implying policy interventions (data sharing incentives/standards) may be required.
  Evidence: Economic reasoning about market formation and data as a scarce asset; no empirical market analysis provided in the summary (theoretical implication).
- These infrastructural and access constraints create unequal starting points that can amplify later disparities in labor-market preparedness.
  Evidence: Inference drawn from observed survey disparities in access, hands-on training, and preparedness; the study did not directly measure labor-market outcomes but links preparedness to potential labor-market effects in the discussion.
- Top-down AI guidance from institutions is common, while grassroots input from educators and students is often missing, which reduces policy relevance and uptake.
  Evidence: Survey items and thematic coding indicating the origin and participatory nature of institutional AI guidelines; comparative prevalence reported in open and closed responses.
- Overreliance on GenAI CDS may lead to deskilling of clinicians, eroding judgment over time and increasing systemic vulnerability.
  Evidence: The paper cites theoretical risk and references limited longitudinal concerns; empirical longitudinal studies demonstrating deskilling are scarce per the paper's stated evidence gaps.
- Commercial structural biology services for routine solved folds may be commoditized, pushing firms toward complex validation, novel targets, or high‑value contract research.
  Evidence: The paper suggests this in 'Disruption of service markets' as a projected industry response; it is a strategic implication rather than an empirically demonstrated trend in the text.
- Returns to AI investments may exhibit increasing returns to scale, reinforcing winner‑take‑most dynamics unless offset by platformization or open‑source diffusion.
  Evidence: Economic scenario reasoning on capital intensity and platform effects; no empirical calibration or econometric evidence provided.
- Because feedbacks from capital and labor onto AI are weak, AI can grow rapidly and may lead to lock-in, concentration, and distributional risks that warrant monitoring and possible redistributive or competition policies.
  Evidence: Empirical finding of weak negative feedbacks to AI in estimated interaction coefficients, combined with theoretical interpretation about growth and lock-in risks.
- Job insecurity rises when FDI is short‑term, footloose, or concentrated in capital‑intensive extractive projects.
  Evidence: Conceptual arguments and empirical examples in the review linking investment temporariness and capital intensity to higher job instability; the empirical evidence is less comprehensive and context-specific.
- Private governance and firm-level solutions (internal standards, bargaining with unions) may proliferate, but these can entrench firm-specific norms and increase market-power asymmetries.
  Evidence: Conceptual argument drawing on governance and industrial-organization literature; no empirical measurement of prevalence or market-power effects included.
- Inadequate protections reduce public trust in mobile-AI services, which can slow diffusion and undercut the growth trajectories that policy narratives anticipate.
  Evidence: Inferred from stakeholder commentary and policy discourse combined with communication-rights theory; the paper does not present survey or adoption-rate data.
- Low-wage and platform workers are particularly exposed to algorithmic management and surveillance, with potential downward pressure on wages, bargaining power, and job quality.
  Evidence: The paper's qualitative analysis of stakeholder comments and policy omissions, combined with literature-based inference about platform labor dynamics; no primary labor-market survey or quantitative wage data provided.
- Soft‑law governance and growth-first narratives risk concentrating benefits (investment, productivity gains) while externalizing costs (privacy harms, biased decisioning) onto vulnerable populations, exacerbating inequality and reducing inclusive economic development.
  Evidence: Analytic inference from qualitative review of governance instruments and policy narratives combined with communications-ecology and political-economy reasoning; not based on quantitative economic measurement in the paper.
- Uncertainty about long-run agentic behavior increases the option value and downside risk of investing in agentic systems, which may raise discount rates and required returns.
  Evidence: Economic argument applying risk/return logic to agentic uncertainty; no quantitative empirical evidence provided.
- Economic rents and advantages may accrue to agents who control large datasets, computing resources, and organizational processes that effectively integrate AI as a co-pilot, potentially increasing market concentration among AI providers.
  Evidence: Economic theory on scale economies and platform effects combined with observed industry patterns; the reviewed literature provides conceptual arguments and case examples rather than broad empirical market-structure measurement.
- Generative AI poses substitution risk for entry-level or routine cognitive work focused on generation or drafting without evaluative responsibility.
  Evidence: Task-based analyses and case studies indicating automation potential for routine generation tasks; empirical demonstrations of AI-produced drafts/outputs that could replace such work, but longer-run displacement evidence is limited.
- Upfront integration and recurring governance costs mean smaller firms may face higher relative costs, potentially increasing scale advantages for larger incumbents.
  Evidence: Deployment case studies and cost reports indicating significant fixed integration and governance costs; inference to market structure is speculative.
- Vendors offering integrated governed hyperautomation stacks may capture premium pricing and increase switching costs, potentially widening adoption gaps between large incumbents and SMEs.
  Evidence: Market-structure and competitive dynamics discussed theoretically in the Implications section; no market-share or pricing data provided.
- There are risks that concentration of modeling capability around well-funded actors could create inequality in capture of downstream economic gains despite open data.
  Evidence: Risk analysis in the discussion section; argued qualitatively without empirical testing in the paper.
- Exposure to AI and platform work produces psychosocial effects for workers, including increased job insecurity, stress, and changing task content in surviving occupations.
  Evidence: Surveys, qualitative case studies, and workplace studies summarized in the review reporting worker‑reported insecurity and stress; the review also highlights inconsistent measurement and limited systematic evidence on psychosocial outcomes.
- Standardized, high-quality data will concentrate competition on modeling, compute, and algorithmic innovation, favoring actors with greater compute resources.
  Evidence: Economic argument presented in the discussion; not evaluated with empirical market data in the paper.
- The paper is the first systematic integration of XAI-based predictive modeling with counterfactual policy simulation specifically targeted at sustainability-oriented HR (Green HRM).
  Evidence: Authors' novelty claim stating this combination is novel in the Green HRM literature; no systematic literature review evidence provided in the summary to independently verify primacy.
- The paper likely includes ablation studies and standard metrics (task success rate, step-wise error, plan coherence) to isolate contributions of the two training stages and to evaluate performance.
  Evidence: Summary states these analyses as 'likely additional methods' (i.e., typical but not fully detailed in the abstract); no direct confirmation or results provided in the provided text.
- This study represents the first attempt to conduct a comprehensive evaluation of artificial intelligence (AI) and its influence on job displacement based on the existing body of literature.
  Evidence: Author assertion in the paper; the excerpt provides no external verification (no citation of prior reviews/meta-analyses to justify the 'first attempt' claim).
- We currently lack an understanding of how political parties perceive the potential impact AI has on employment, the role of regulations in protecting workers from AI-related job losses, and the importance of AI educational and training programs.
  Evidence: Statement of a literature/knowledge gap motivating the study (assertion by the authors; no empirical basis provided in the excerpt).
- Observable firm-level and economy-wide moments—changes in spans of control, manager share of payroll, incidence of new tasks, employment growth, and shifts in the wage distribution—can be used to test the model's predictions.
  Evidence: Model-implied empirical identification strategy and suggested measurable moments in the paper's discussion/implications section (theoretical prediction, not an empirical test).
- This study is the first systematic presentation of factual data describing employment outcomes of Russian university AI graduates.
  Evidence: Authors' stated novelty claim in the paper (asserted uniqueness of systematic institutional-level employment outcome data for Russian AI graduates).
- Hybrid agency implies complementarity between GenAI and managerial/knowledge‑worker skills (curation, evaluation, coordination), potentially increasing returns to those skills while automating routine cognitive tasks—consistent with skill‑biased technological change.
  Evidence: Synthesis of recurring themes linking GenAI capabilities with managerial skill topics in the thematic clusters; positioned as an implication for labour demand and skill composition rather than an empirically tested effect.
- Public investments in standards, verification infrastructure, and public-interest datasets can correct market failures and support trustworthy AI.
  Evidence: Policy recommendation informed by governance and public-good theory and examples from the literature; the claim is prescriptive and not validated by new empirical evidence within the paper.
- Humans who configure and teach agents gain understanding and skills themselves: learning-by-teaching generates human capital accumulation endogenous to agent deployment (bidirectional scaffolding).
  Evidence: Qualitative, naturalistic observations and comparative documentation of users configuring/teaching agents during the one-month study; no randomized assignment or pre/post quantitative skill testing reported.