Evidence (3470 claims)
- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
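The matrix can be worked with directly as data. Note that in several rows the four directional counts sum to slightly less than the stated total, leaving a residual that presumably reflects claims whose direction was not classified. A minimal sketch, using three rows copied verbatim from the matrix above:

```python
# Compute the positive share and the unclassified residual per outcome,
# using rows copied from the Evidence Matrix above.
rows = {
    # outcome: (positive, negative, mixed, null, total)
    "Firm Productivity": (385, 46, 85, 17, 539),
    "AI Safety & Ethics": (183, 241, 59, 30, 517),
    "Job Displacement": (11, 71, 16, 1, 99),
}

for outcome, (pos, neg, mixed, null, total) in rows.items():
    share_pos = pos / total
    # Directional counts need not sum to the total; the residual may be
    # claims with an unlisted or unclassified direction.
    residual = total - (pos + neg + mixed + null)
    print(f"{outcome}: positive share {share_pos:.3f}, unclassified {residual}")
```

For example, Firm Productivity has a positive share of 385/539 ≈ 0.714 with a residual of 6 claims, while Job Displacement sums exactly (residual 0).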
Org Design
This lack of focus creates uncertainty about whether regulatory technology supports legitimate economic recovery or instead strengthens exclusion and informality.
Interpretive observation from gaps identified in the reviewed literature; no empirical resolution provided.
Differences in human intervention effectiveness across escalation types are partly explained by variation in workers' post-escalation intervention effort.
Observed correlations (and subgroup comparisons) in the randomized experiment showing that measures of post-escalation effort (e.g., message counts, share of chat rounds, proactivity) vary across escalation types and relate to outcome differences.
Artificial intelligence (AI) is rapidly reshaping knowledge-intensive work by automating, augmenting, and reconfiguring core professional activities.
Paper asserts this as a motivating observation based on prior literature and descriptive claims; no original empirical sample or quantified data reported.
Metis can be subdivided into 'constitutive metis' (knowledge destroyed by the act of formalization) and 'operational metis' (system-specific familiarity that automation can progressively absorb).
Conceptual taxonomy proposed by the authors; definitions and distinctions are theoretical and illustrated via argumentation and prior literature rather than quantified empirical measurement.
Augmented work agency is shaped by whether applications are generative or non-generative, by employees' experiences of anxiety and technostress, and by micro-politics through which teams negotiate AI use and AI ethics.
Thematic findings from semistructured interviews (28 participants) and document review identifying these factors as shaping agency in practice.
The analysis uncovers three central tensions shaping AI-mediated work: autonomy versus orchestration; capability versus dependency; and experimentation versus ethics.
Recurring themes identified through qualitative interviews (28 participants) and document review; interpretive synthesis presented in findings.
AI integration transforms managerial practices, workforce identities and organizational coordination.
Thematic and interpretive analysis of semistructured interviews with 28 managers/professionals across 12 organizations and review of organizational documents.
These AIECI benefits were contingent on complementary conditions—particularly data quality, governance, managerial interpretation, and integration of intelligence outputs into operating decisions.
Cross-case pattern-matching across five analytical dimensions (intelligence source, AI mechanism, decision domain, economic implication, boundary condition) identifying recurring contingencies in the four firms' archival evidence.
The dominant explanation for the gap locates it in model capability; instead, software-engineering capability emerges from a model-harness-environment system where a runtime substrate (the harness) mediates how an agent observes a project, acts on it, receives feedback, and establishes that a change is complete.
Conceptual argument and reframing presented in the paper (abstract). The paper formalizes this perspective rather than reporting a large-scale empirical test in the abstract.
The research challenges for this vision stem from a broader flexibility–robustness tension; navigating it effectively requires moving beyond the on-the-fly paradigm.
Analytical claim in paper identifying a design trade-off (flexibility vs. robustness) as the core challenge motivating the proposed shift; no empirical demonstration provided.
Generative AI lowers barriers to solo entrepreneurship while reinforcing team-based advantages.
Synthesis of the observed patterns in the Product Hunt data: sharp increase in solo launches after ChatGPT-3.5 (barrier lowering) combined with persistent team dominance among top-quality outcomes (reinforcing team advantages).
Message for AI alignment: smooth scoring-based oversight cannot elicit truthful reports from a strategic agent; sharp thresholds (step functions) are the calibration-preserving design.
Synthesis of the paper's theoretical impossibility and constructive results applied to AI oversight setting (argument plus the step-function constructive escape).
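The smooth-versus-step contrast can be illustrated with a toy model (invented here, not the paper's formalism): an agent chooses a report to maximize its oversight score minus a quadratic cost of misreporting. Under a strictly increasing smooth score the optimum always sits above the truth, whereas a step score leaves reports that already clear the threshold untouched:

```python
# Toy illustration of smooth vs. step scoring for a strategic reporter.
# Functional forms and the cost parameter c are invented for this sketch.
import numpy as np

def best_report(truth, score, c=2.0):
    """Grid-search the report r that maximizes score(r) - c*(r - truth)^2."""
    grid = np.linspace(0.0, 1.0, 10_001)
    payoff = score(grid) - c * (grid - truth) ** 2
    return grid[int(np.argmax(payoff))]

smooth = lambda r: r                       # smooth, strictly increasing score
step = lambda r: (r >= 0.5).astype(float)  # sharp pass/fail threshold at 0.5

truth = 0.7
r_smooth = best_report(truth, smooth)  # optimum sits strictly above the truth
r_step = best_report(truth, step)      # truth already clears the threshold
```

In this toy, the smooth score distorts every report (the first-order condition pushes the optimum above the truth by 1/(2c)), while the step score gives agents above the threshold no marginal incentive to shade; it is only a sketch of the intuition, not the paper's impossibility result.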
Public discussion of generative AI in accounting swings between the allure of full automation and job-displacement anxiety, yet the most immediate reality in organizations is human + AI work.
Paper's background/intro synthesizing recent research and practitioner commentary (2023–2025); conceptual observation rather than empirical test.
Integrating Generative AI into agile development processes has potential benefits and limitations for planning efficiency.
High-level conclusion based on the controlled experiment with GitLab Duo and qualitative participant feedback discussed in the paper.
Two minimal extension policies, each derived from the observation, close the regime along orthogonal axes: a sample-size-aware static rule (Periodic-with-floor) closes the granularity-failure case, while a history-conditioned suspicion-escalation policy closes the coverage-failure case for the naive Drift strategy. Neither policy closes both, exactly as the observation predicts.
Design and analysis of two auditor policies in the paper; theoretical argument from Observation 1 and supporting simulation results illustrating which failure modes each policy addresses.
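The paper's exact policy definitions are not reproduced here; the following is a hypothetical sketch of what the two rules could look like, with all parameter names invented:

```python
# Hedged sketch of the two auditor policies named above; the paper's actual
# rules may differ. All thresholds and parameter names are assumptions.

def periodic_with_floor(step, period=10, min_samples=30, seen=0):
    """Static rule: audit every `period` steps, but never before a minimum
    number of observations has accumulated (the sample-size floor)."""
    return seen >= min_samples and step % period == 0

def suspicion_escalation(history, base_rate=0.05, bump=0.25):
    """History-conditioned rule: raise the audit probability after each
    observed anomaly, resetting to the base rate after a clean step."""
    suspicion = base_rate
    audit_probs = []
    for anomalous in history:
        audit_probs.append(suspicion)
        suspicion = min(1.0, suspicion + bump) if anomalous else base_rate
    return audit_probs
```

The intended division of labor matches the claim: the sample-size floor addresses granularity failures (auditing before enough evidence exists), while escalation addresses coverage failures (slow drift that a fixed schedule misses); neither mechanism covers the other's failure mode.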
The strategic interplay between antitrust regulation and vertical integration materially influences the evolutionary transitions of the computing power ecosystem.
Core focus of the paper's tripartite evolutionary game model which explicitly models government regulators, incumbents, and downstream innovators and analyzes resulting equilibria and transitions (method: theoretical evolutionary game + analytical derivation).
The evolution of the AI computing power innovation ecosystem manifests distinct stage-based progressions and threshold-driven bifurcation characteristics, potentially transitioning from an initial 'natural monopoly and passive dependence' state through intermediary states (e.g., 'comfort zone trap' or 'regulatory stalemate') toward a mature configuration of 'co-opetition and endogenous growth.'
Derived from the paper's tripartite evolutionary game model and analytical derivation of evolutionarily stable strategies, with supporting numerical simulations exploring parametric sensitivities (method: theoretical evolutionary game + numerical simulation).
The computing power industry is undergoing a paradigm shift from traditional linear supply chains toward complex, interdependent innovation ecosystems driven by the rapid proliferation of generative artificial intelligence.
Conceptual claim presented in the paper's introduction/motivation; supported by the paper's theoretical framing and literature-based motivation rather than empirical data (method: narrative/theoretical framing).
Empirical analysis of cases demonstrates that diverse, and often non-ethics-related, levers motivate organizations to abandon AI development.
Analysis of cases drawn from the AI incident database and practitioner survey contrasted with the taxonomy from the scoping review; specific counts/effect measures not provided in the summary.
Three sovereignty boundaries determine whether AI remains an amplifier within a human-governed system or becomes a de facto control center: irreversible decision authority, physical resource mobilization authority, and self-expansion authority.
Conceptual model element in the paper; identification and definition of three 'sovereignty boundaries' used to analyze governance risks.
The paper formalizes this claim through decision-energy density: the rate-weighted capacity of a node to generate, evaluate, select, and execute consequential decisions.
Formal/modeling claim — the paper defines and uses a formal metric called 'decision-energy density' within its theoretical framework.
AI capabilities can be copied, invoked, embedded in workflows, and scaled across institutions at low marginal cost.
Descriptive claim about AI technology characteristics made in the paper; supported by conceptual argument and examples rather than quantified empirical data.
Earlier high-risk technologies were slowed by capital intensity, physical bottlenecks, organizational inertia, and specialized supply chains.
Historical/analytic claim presented as background context in the paper; supported by conceptual comparison rather than a specific empirical study.
Scientific institutions, distinctively, manufacture legitimate judgment, so they do not merely adapt to AI; they compete with it for the same functional role.
Conceptual/theoretical assertion in the paper describing institutional roles; no empirical data or sample size provided in the excerpt.
No single governance setting dominates across all contexts; moderate governance becomes increasingly competitive as the learner accumulates experience within the governed action space.
Empirical finding reported from experiments with the contextual-bandit learner operating under different governance constraints and learning over time; comparative performance over learning horizon described in the paper. Sample size / trial counts not provided in the excerpt.
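As an illustration only (the paper's benchmark, learner, and governance settings are not specified in the excerpt), a toy epsilon-greedy bandit with a governance-restricted action set shows the mechanism: when the permitted subset contains good actions, the governed learner searches a smaller space and becomes competitive as experience accumulates.

```python
# Toy epsilon-greedy bandit under a "governance" constraint on the action set.
# Payoff means, epsilon, and the governed subset are all invented here.
import random

def run(actions, true_means, steps, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = {a: 0 for a in actions}
    values = {a: 0.0 for a in actions}
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.choice(actions)                       # explore
        else:
            a = max(actions, key=lambda x: values[x])     # exploit
        r = true_means[a] + rng.gauss(0, 0.1)             # noisy reward
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]          # running mean
        total += r
    return total / steps

means = {0: 0.2, 1: 0.5, 2: 0.8, 3: 0.4}  # hypothetical per-action payoffs
ungoverned = run([0, 1, 2, 3], means, steps=2000)
governed = run([1, 2], means, steps=2000)  # moderate governance: safe subset
```

This only illustrates the stated direction of the effect under a favorable constraint; whether governance helps in practice depends on whether the governed action space retains high-value actions, which is exactly the context-dependence the finding emphasizes.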
This workload-buffering effect (governance improving performance while reducing fatigue) contradicts the usual framing of governance as pure overhead.
Interpretation and comparison of empirical manufacturing results against prior framing in literature (qualitative claim within the paper). No sample size provided.
Governance is not a binary switch but a tunable design variable: tighter constraints predictably convert autonomous AI assignments into supervised collaborations, with domain-specific costs and benefits.
Empirical finding reported from experiments using the HAAS benchmark across the two domains (software engineering and manufacturing); qualitative and/or quantitative comparisons of allocations under varying governance constraints. Paper does not state sample size in the provided text.
Whether the futures these configurations help create remain governable and worth inhabiting will depend on leaders who can see, early enough, where and how consequential decisions are actually being shaped.
Normative/prognostic claim linking future governability to leaders' detection capabilities (conceptual; no empirical test provided in the excerpt).
These configurations will shape how power, responsibility, and trust are distributed in organizational life.
Theoretical/prognostic claim in the paper linking configurations to distribution of power, responsibility, and trust (no empirical quantification in the excerpt).
Augmentation is bounded rather than linear (i.e., human-AI augmentation shows diminishing or negative returns past a balanced zone).
Synthesis of interview themes across 34 cases producing the bounded-augmentation / curvilinear conceptualization.
Mediators such as trust, cohesion and accountability are reshaped when AI-generated contributions enter collaboration.
Thematic evidence from interviews indicating changes in trust, cohesion and accountability dynamics associated with the introduction of AI outputs into team collaboration.
Social (leadership engagement, trust, ownership, mediation and alignment) and technical (automation, creation, reliability, distraction and integration) subsystems combine to enable or erode team effectiveness, summarized in an e-leadership–AI orientation matrix.
Analytic synthesis from thematic coding (Gioia-informed) of interview data producing a conceptual matrix mapping social and technical factors to outcomes.
Analysis identifies a curvilinear pattern of bounded augmentation, where effectiveness peaks in a zone of balanced use but declines under under-use and over-reliance.
Thematic (Gioia-informed) analysis of 34 semi-structured interviews with project managers across five UK industries; pattern emerges from cross-case coding and synthesis.
Firms may continue to exist as legal and physical entities, but their coordinating function will be displaced as they become data nodes within regionally governed AI infrastructure.
Predictive/conceptual claim within the framework; no empirical sample reported in the excerpt and presented as a theoretical outcome of Interface Internalization.
The Structural Dissolution Framework challenges the Coasian view that organizational boundaries are determined by transaction cost minimization, arguing that AI makes such boundaries economically obsolete.
Theoretical critique of transaction-cost-based explanations for firm boundaries presented in the paper; argumentative and conceptual rather than supported by empirical tests in the provided summary.
Regional data sovereignty entities will emerge as organizational forms that replace the coordinating role of firms and markets.
Normative/predictive claim within the paper's framework arguing for new organizational forms (regional data sovereignty entities); illustrated conceptually (e.g., through resource-dependent regional economies) rather than empirically tested in the provided text.
Domain-specific data refinement infrastructure will become the new basis of positional control in industries.
Theoretical claim in the framework asserting a shift in positional control to data refinement infrastructure; presented as a predicted structural outcome rather than supported by empirical data in the provided text.
AI adoption moves value creation away from physical resources and human collaboration toward continuous token flows produced through data refinement loops.
Theoretical/analytical claim within the Structural Dissolution Framework and illustrative discussion; no empirical quantification provided in the text excerpt.
The mechanism driving this restructuring is 'Interface Internalization', through which inter-agent coordination is absorbed into intra-system computation.
Conceptual mechanism defined and argued in the paper; presented as the central theoretical mechanism rather than as an empirically validated finding.
AI dissolves the boundaries that once separated firms, markets, experts, and consumers by internalizing human multimodal interfaces (language, vision, and behavioral data) into computational systems.
Theoretical argument and conceptual framework introduced in the paper (Structural Dissolution Framework); no empirical sample or quantitative analysis reported for this claim in the text provided.
This hybrid Make governance form has qualitatively different economics, capability requirements, and governance structures than pre-AI in-house development.
Paper's conceptual comparison between pre-AI hierarchy and post-AI hybrid Make governance (theoretical reasoning and examples; no empirical quantification).
AI reshapes seven canonical decision determinants for make-or-buy choices: cost, strategic differentiation, asset specificity, vendor lock-in, time-to-market, quality and compliance, and organizational capability.
Paper's factor-level conceptual analysis enumerating and discussing seven determinants (theoretical synthesis rather than empirical measurement).
Demographic characteristics intersect with AI exposure; that is, AI exposure varies across demographic groups.
Paper reports that it examines how demographic characteristics intersect with exposure based on recent empirical studies; no demographic breakdowns or sample sizes provided in the abstract.
Recent studies combine task-level exposure metrics with employment and usage data to assess AI exposure and impacts.
Paper notes that it draws on studies that use task-level exposure metrics alongside employment and usage data; methodological claim rather than a quantitative result.
Generative large language models (LLMs) present organizations with a transformative technology whose labor market implications remain nascent yet consequential.
Statement in paper synthesizing emerging empirical research; no specific study, method, or sample size reported in the abstract.
The adoption of AI in Israel constitutes a systemic transformation of employment relations, necessitating doctrinal adaptation and institutional reform to keep the labor market aligned with foundational legal principles.
Synthesis and conclusion from the paper's combined legal and empirical analysis; presented as the author's overarching interpretive claim rather than as a specific quantified finding.
Within the public sector, there is an emerging policy trend to incorporate AI considerations into workforce planning, including examining whether human positions may be substituted by technological solutions prior to recruiting new employees.
Paper reports an observed policy trend in public-sector workforce planning; specific policy documents, jurisdictions, or counts not provided in the excerpt.
The study establishes statistically significant relationships between organizational AI adoption and compensation dynamics.
Econometric estimates (difference-in-differences and propensity score matched comparisons) using the combined datasets listed in the paper and controlling for industry, firm size, geography, occupation characteristics, and macroeconomic variables.
The study establishes statistically significant relationships between organizational AI adoption and changes in occupational structures.
Same econometric approach (difference-in-differences and propensity score matching) applied to combined datasets (Anthropic Economic Index, Census Business Trends and Outlook Survey, Federal Reserve regional surveys, labor market analytics), with controls for industry, firm size, location, occupation-level characteristics, and macroeconomic environment.
The study establishes statistically significant relationships between organizational AI adoption and changes in employment patterns in the United States during 2022–2025.
Econometric analysis using multiple large-scale data sources (Anthropic Economic Index, U.S. Census Bureau Business Trends and Outlook Survey, Federal Reserve regional surveys, labor market analytics) and methods described as difference-in-differences estimation and propensity score matching controlling for industry (NAICS 2-digit), firm size, geography, occupation characteristics, and macro conditions.
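In its simplest two-period form, the difference-in-differences logic behind these estimates reduces to comparing the outcome change among adopters with the change among non-adopters. A minimal sketch with hypothetical numbers (not the study's data):

```python
# Minimal two-period difference-in-differences estimator; the inputs below
# are invented illustrative means, not figures from the study.
def did(treated_pre, treated_post, control_pre, control_post):
    """DiD estimate: change among adopters minus change among non-adopters."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean outcomes for AI-adopting vs. non-adopting firms
effect = did(treated_pre=50.0, treated_post=58.0,
             control_pre=49.0, control_post=52.0)  # -> 5.0
```

The control-group change nets out common trends (here, the 3-point drift shared by both groups), which is why the design requires the parallel-trends assumption; the study's regression versions additionally condition on industry, firm size, geography, occupation characteristics, and macroeconomic controls.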