Evidence (2954 claims)

Claim counts by topic:

- Adoption: 5126
- Productivity: 4409
- Governance: 4049
- Human-AI Collaboration: 2954
- Labor Markets: 2432
- Org Design: 2273
- Innovation: 2215
- Skills & Training: 1902
- Inequality: 1286
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
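A matrix like the one above is a straightforward cross-tabulation of flat claim records. A minimal sketch, assuming each claim carries `outcome` and `direction` fields (the field names and sample records are illustrative, not taken from the underlying dataset):

```python
import pandas as pd

# Hypothetical flat claim records; field names are illustrative assumptions.
claims = pd.DataFrame([
    {"outcome": "Developer Productivity", "direction": "Positive"},
    {"outcome": "Developer Productivity", "direction": "Mixed"},
    {"outcome": "Job Displacement", "direction": "Negative"},
])

# Cross-tabulate outcome category against direction of finding;
# margins=True adds the row/column totals shown as "Total" above.
matrix = pd.crosstab(claims["outcome"], claims["direction"],
                     margins=True, margins_name="Total")
print(matrix)
```

With the real claim records, the same two-line cross-tab would reproduce the full matrix, including the Total column.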
Human-AI Collaboration (filtered view)
There are research opportunities to measure returns to 'teaching' (causal impact of configuring agents on human skill accumulation and earnings) and to model agent-platform ecosystems with network effects, spillovers, and endogenous quality hierarchies.
Author-stated research agenda and proposed empirical questions derived from the observed phenomena; not empirical results but recommended directions.
Future research should quantify calibration and skill of LLMs over longer horizons, develop ensembles that pair LLMs with domain specialists, and expand temporally grounded benchmarks across different conflict types.
Authors' stated research agenda and limitations: calls for longer-horizon calibration studies and broader benchmarking derived from observed domain heterogeneity and the scope of the present snapshot.
Recommended research priorities include hierarchical/temporal-decomposition methods, continual learning, robust adaptation to non-stationarity, and causal/structured reasoning to handle multi-factor interactions.
Paper discussion linking observed failure modes to methodological gaps and proposing research directions to address limitations; these are recommendations rather than experimentally validated claims.
Regulators and payers will require clinical validation, safety guarantees, and clear liability frameworks for human–AI shared decision-making before wide-scale deployment.
Policy implication stated in the paper's discussion section based on general regulatory considerations; not an empirical result from the study.
Broader implication for AI economics: firm-level attention allocation, nonlinearities, thresholds, and governance/incentive design should be incorporated into economic models of AI adoption because AI's effects on workers and CSR are not monotonic and depend on industry and governance.
Synthesis of empirical findings (inverted U and moderator effects) and theoretical argument; recommended direction for future modeling and empirical work stated in the paper.
The presence of an organizational AI policy does not predict individuals' intent to increase usage but functions as a marker of maturity, formalizing successful diffusion by Enthusiasts while acting as a gateway the Cautious have yet to reach.
Analysis of a policy variable within the survey dataset (N=147) showing no predictive relationship with individual intent to increase AI use, but an association between presence of policy and indicators of organizational adoption/maturity and differential reach into archetype groups.
The study recommends iterative prompt refinement, integration with adaptive learning models, and further exploration of autonomous self-prompting mechanisms.
Concluding recommendations derived from the study's results and interpretation; presented as future directions rather than empirically tested interventions within this study.
Future research should explore sector-specific AI adoption challenges and long-term workforce adaptation strategies.
Author recommendation presented in the paper's discussion/future-work section, as reported in the summary.
Researchers should combine qualitative studies with administrative/matched employer–employee data and experimental/quasi-experimental designs (pilot rollouts, staggered adoption) to identify causal effects of AI on tasks, productivity, and wages.
Methodological recommendation by authors based on limitations of their qualitative study (15 UX designers) and the need to quantify observed phenomena; not an empirical claim tested in the paper.
Future research priorities include obtaining causal estimates (e.g., field experiments) of productivity gains from trust-mediated AI adoption and conducting cost–benefit analyses of trust-building interventions.
Study’s stated research agenda/recommendations; not an empirical claim but a recommended direction for follow-up research.
Firms investing in human–AI co‑creation infrastructure may gain a resilience premium; policymakers and standards bodies should consider governance frameworks for adaptive algorithmic systems that balance responsiveness with oversight.
Policy and investment implication inferred from empirical results on resilience and detection performance; direct evidence of market valuation or policy outcomes is not reported.
Greater reliance on algorithmic co‑creation shifts labor demand toward roles skilled in model oversight, interpretive judgment, and human‑machine interaction rather than purely manual segmentation tasks.
Inference from the operationalization of human–AI co‑creation via the Canvas and observed changes in practitioner workflows during 6‑month ethnography (n = 23); workforce composition effects are not empirically measured at scale in the study.
A ~90% reduction in strategic planning cycle time indicates lower managerial coordination costs and faster reallocation of marketing and R&D budgets.
Inference from measured reduction in planning cycle length (~90%) observed in the study (see ethnography/system logs); direct measures of coordination costs and budget reallocation outcomes are not reported in the summary.
Algorithmic Canvas–enabled autopoietic STP increases firms' ability to adapt endogenously to shocks, implying higher realized productivity in volatile markets and lower deadweight losses from mis‑targeting.
Inference drawn from empirical findings on resilience and detection performance (44% greater resilience, improved signal detection) and theoretical reasoning about dynamic capabilities; productivity and deadweight loss are not directly measured in the reported empirical results.
Economic evaluations of AI adoption should include psychological and human-capital externalities (effects on self-efficacy, skill depreciation, job satisfaction) to fully account for welfare and productivity dynamics.
Argument grounded in experimental and survey findings showing psychological impacts of AI-use mode; general recommendation for research and evaluation rather than an empirical finding.
Improved alignment can reduce harms from misinterpretation (incorrect decisions, misinformation), lowering downstream liability and reputational risk for vendors and customers.
Paper's safety and externalities discussion argues this as a likely consequence; the claim is theoretical and not supported by empirical incident data in the paper.
Providers may charge a premium for alignment-enabled API tiers or incorporate C.A.P. into enterprise plans because of additional compute per interaction, affecting pricing and unit economics.
Paper's pricing and costs discussion predicts potential monetization strategies and pricing experiments (A/B pricing, willingness-to-pay studies) but does not report market data.
C.A.P. has potential economic effects: it can reduce time lost to misinterpretation, thereby increasing effective throughput and productivity, though net gains depend on trade-offs with pre-processing overhead.
Economic implications section provides conceptual cost–benefit arguments and recommends pilot measurements (time saved, reduced human review cost) but provides no empirical economic measurement.
C.A.P. shifts interactions from one-way command-execution to two-way, partnership-style collaboration, increasing perceived partnerliness.
Theoretical argument drawing on cognitive science and Common Ground theory and proposed human-evaluation measures (satisfaction, perceived collaboration); no empirical human-subject results reported.
C.A.P. improves long-term and dynamic dialogue alignment and reduces off-topic or mechanically incorrect responses.
Main argument of the paper based on the combined functions (expansion, weighted retrieval, alignment verification, clarification); the paper provides conceptual/theoretical justification but does not report large-scale empirical results.
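The four functions named above (expansion, weighted retrieval, alignment verification, clarification) suggest a simple pipeline shape. A minimal sketch under loose assumptions; every function body is an illustrative stand-in, not the paper's C.A.P. implementation:

```python
# Only the four function roles come from the claim above;
# the bodies, weights, and threshold are illustrative placeholders.

def expand(prompt: str) -> list[str]:
    """Expansion: generate candidate interpretations of the prompt."""
    return [prompt, prompt + " (literal reading)"]

def weighted_retrieve(candidates: list[str]) -> list[tuple[str, float]]:
    """Weighted retrieval: score candidates against context (placeholder weights)."""
    return [(c, 1.0 / (i + 1)) for i, c in enumerate(candidates)]

def verify_alignment(scored, threshold=0.8):
    """Alignment verification: accept the top candidate only above a threshold."""
    best, weight = max(scored, key=lambda pair: pair[1])
    return best if weight >= threshold else None

def clarify(prompt: str) -> str:
    """Clarification: fall back to asking the user instead of guessing."""
    return f"Did you mean: {prompt!r}?"

def cap_step(prompt: str, threshold: float = 0.8) -> str:
    aligned = verify_alignment(weighted_retrieve(expand(prompt)), threshold)
    return aligned if aligned is not None else clarify(prompt)

print(cap_step("schedule the review"))        # alignment verified
print(cap_step("schedule the review", 1.5))   # forced fallback to clarification
```

The design point the sketch illustrates: clarification is a fallback triggered only when alignment verification fails, which is how the protocol shifts interactions toward two-way collaboration without querying the user on every turn.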
Public archives of prompts and commits accelerate diffusion by lowering search/learning costs and enabling replication, thereby increasing adoption speed and lowering entry barriers.
Paper's asserted implication based on the existence of public artifacts and general reasoning about knowledge diffusion; this is an interpretive claim rather than an experimentally validated finding (argumentative, extrapolative).
Public investment in open environments, robotics testbeds, and safety research can reduce concentration risks and externalities and democratize access to embodied AI research.
Policy recommendation based on anticipated strategic importance of shared infrastructure; not empirically validated here.
Value in the AI ecosystem may shift from passive text/image corpora toward rich interaction datasets and simulated/real environments; ownership and control of simulation platforms and testbeds could become strategically important assets.
Economic and strategic inference from the proposed technical emphasis on embodied/interaction learning; no supporting market data in the paper.
Increased sample efficiency and transfer will reduce compute and data costs, lowering barriers to entry for firms and broadening feasible AI applications.
Economic argument connecting technical metrics to cost and market effects; not empirically demonstrated in the paper.
More autonomous learners that can self-experiment and learn from observation will lower deployment costs for adaptable agents and accelerate automation across more occupations, especially embodied and social tasks.
Economic reasoning and projection based on expected technical improvements; speculative without empirical economic analysis in the paper.
Cross-cutting elements (hierarchical organization, curriculum/bootstrapping, intrinsic motivation, uncertainty estimation, memory consolidation, neuromodulatory analogs) are important for improving learning in the proposed architecture.
Conceptual recommendation based on known mechanisms from neuroscience and machine learning literature; not validated in the paper.
System M (meta-control) should generate internal signals that decide when to prioritize A vs B, allocate attention, consolidate memory, and trade off uncertainty, novelty, expected information value, and effort costs.
Design proposal motivated by biological meta-control and decision theories; no empirical tests presented.
System B (action-driven learning) should learn through intervention, consequences, and trial-and-error, using active exploration, reinforcement learning, and hierarchical/skill learning.
Architectural proposal aligning with RL and hierarchical learning literature; theoretical description without experimental evidence.
System A (observation-driven learning) should build models of others, social contingencies, and passive affordances through imitation, self-supervised representation learning, and inverse RL.
Architectural specification and mapping to existing algorithms (imitation, SSL, inverse RL); no empirical validation provided.
Integrating observation-driven and action-driven learning with meta-control and evolutionary/developmental priors should improve sample efficiency, robustness, transfer, and lifelong adaptation.
Conceptual argument and proposed integration of methods; suggested but untested experimentally in the paper.
A biologically inspired three-part architecture (System A: observation-driven learning; System B: action-driven learning; System M: internally generated meta-control) can address these limitations.
Theoretical proposal and analogy to biological systems; no empirical validation reported in the paper.
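The proposed division of labor among the three systems can be made concrete with a toy structural sketch. The class names, System B's value update, and System M's familiarity rule are all illustrative assumptions; the papers specify the roles conceptually, not in code:

```python
from dataclasses import dataclass, field

@dataclass
class SystemA:
    """Observation-driven learning: imitation / self-supervised / inverse RL."""
    model: dict = field(default_factory=dict)

    def observe(self, state, action):
        # Toy imitation: remember the demonstrated state-action pairing.
        self.model[state] = action

@dataclass
class SystemB:
    """Action-driven learning: trial-and-error with consequences (RL-style)."""
    values: dict = field(default_factory=dict)

    def act_and_learn(self, state, action, reward, lr=0.5):
        # Toy value update toward the observed reward.
        q = self.values.get((state, action), 0.0)
        self.values[(state, action)] = q + lr * (reward - q)

class SystemM:
    """Meta-control: decides when to rely on observation vs. intervention."""

    def choose(self, a, b, state):
        if state in a.model:      # familiar state: low uncertainty, imitate
            return ("A", a.model[state])
        return ("B", "explore")   # novel state: hand control to System B

a, b, m = SystemA(), SystemB(), SystemM()
a.observe("door_closed", "open_door")
b.act_and_learn("novel_state", "explore", reward=1.0)
print(m.choose(a, b, "door_closed"))  # ('A', 'open_door')
print(m.choose(a, b, "novel_state"))  # ('B', 'explore')
```

In the full proposal, System M would weigh uncertainty, novelty, expected information value, and effort costs rather than a simple familiarity check, but the routing structure is the same.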
Embedding LLM coaching tools in platforms (employee onboarding, customer support, peer-support communities) could raise overall conversational quality by improving expressive outcomes rather than only informational accuracy.
Authors' implication drawn from trial results showing improved alignment to empathic norms after personalized coaching; no field deployment evidence provided in the paper.
LLM-driven personalized coaching can cheaply scale soft-skill training (empathy expression) that would otherwise require costly human trainers, suggesting a high-return application of AI in workforce development.
Implication drawn from observed efficacy of brief automated coaching in the trial and the scalable nature of LLM deployment; no direct economic field trial provided in the paper.
Barriers to entry may be larger for tacit‑capability‑driven systems than for rule‑based systems, potentially increasing market concentration.
Economic argument linking tacit capabilities to requirements for large data, compute, and specialized training dynamics; speculative and not empirically tested in the paper.
Platform design that implements robust context‑sensitive memory gating (fine‑grained policy engines, provenance, auditable suppression logic) can reduce downstream harms and may become a competitive product differentiation.
Policy and product recommendation based on BenchPreS results; the paper offers this as a plausible solution path but does not provide experimental validation of such platform mechanisms.
Improved predictive accuracy from AI tools can potentially improve screening, promotion, and retention decisions and thereby increase firm productivity by better allocating human capital.
Framing/implication in the paper: authors argue improved measurement and prediction could plausibly enhance managerial decision quality; this is presented as an implication rather than an empirically tested result within the study.
Fee-for-service payment structures may not reward efficiency gains from AI; value-based payment or shared-savings models are better aligned to incentivize adoption that reduces total cost and improves outcomes.
Health policy and reimbursement literature synthesizing incentives under different payment models; limited empirical testing of reimbursement models for AI-assisted services.
Effective human–AI collaboration will shift task content toward complementary activities (supervision, interpretation, creative/problem-solving), increasing demand for these complementary skills and potentially raising skill premia for workers who actualize AI affordances.
Theoretical prediction grounded in complementarity arguments and affordance actualization; no empirical sample or quantification provided.
Productivity gains from AI depend not only on the technology's capabilities but on organizational adaptation and successful affordance actualization; therefore investments in supportive strategy and mentoring can increase the fraction of potential AI productivity realized.
Theoretical implication derived from integrating AST and AAT literatures; recommended for empirical testing but not empirically demonstrated in the paper.
Strategic innovation backing (organizational investments, resource allocation, governance, and incentives) enables experimentation and scaling of human–AI work and thereby increases realized returns to AI investments.
Theoretical proposition based on literature integration and normative argument; no empirical sample or original data presented.
Because coordination costs could rise more slowly with team size under AI mediation, teams can scale and reorganize more easily (scalability effect).
Theoretical framework describing how lowered coordination frictions map to scaling properties; supported by illustrative scenarios but no empirical data or simulation results.
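One way to see the scalability claim is the classic communication-channel count: direct pairwise coordination requires n(n-1)/2 channels, while a hub-style mediation layer requires only n. Treating channel count as a proxy for coordination cost is an assumption for illustration, not a result from the paper:

```python
# Channel-count arithmetic behind the scalability claim.

def pairwise_channels(n: int) -> int:
    """Direct coordination: every pair of team members maintains a channel."""
    return n * (n - 1) // 2

def mediated_channels(n: int) -> int:
    """Mediated coordination: each member links only to the mediation layer."""
    return n

for n in (5, 20, 100):
    print(f"n={n}: pairwise={pairwise_channels(n)}, mediated={mediated_channels(n)}")
```

The quadratic-versus-linear gap (4950 vs. 100 channels at n=100) is the arithmetic sense in which coordination costs can rise more slowly with team size under mediation.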
AI mediation can increase inclusion by enabling greater participation of non-native speakers and workers located in more geographies and roles.
Conceptual argument and examples suggesting reduced language/modality frictions expand feasible participation; no empirical estimates or trials presented.
AI-mediated coordination can produce productivity gains through faster, less error-prone coordination and reduced rework.
Illustrative cases and theoretical linkage between mediation functions (translation, intent-alignment, execution) and productivity outcomes; no quantification or empirical testing in the paper.
By reducing dependence on a shared human language, an AI mediation layer has the potential to lower coordination costs, increase productivity and inclusion, and enable scalable global collaboration.
Theoretical framework and illustrative scenarios mapping language-mediation capabilities to coordination costs and organizational outcomes; no empirical estimates or sample data provided.
AI technologies — notably multilingual language models, multimodal systems, and autonomous agents — can function as a “universal collaboration layer” that mediates communication, aligns intent, and coordinates execution across linguistically and culturally diverse teams.
Paper's primary approach is conceptual/theoretical: synthesis of AI capabilities mapped to coordination functions and illustrative case examples. No empirical or experimental sample; no large-scale data reported.
Policy interventions that promote transparency, standardized feedback channels, auditability, and training for oversight roles can improve trust calibration and economic returns to AI investments.
Policy recommendation based on synthesis of interview findings (N=40) regarding enablers of trust calibration and theoretical extension to expected economic impacts; this is a prescriptive inference rather than an empirically tested policy outcome in the study.
Economic assessments of ecological AI should go beyond model accuracy to measure conservation outcomes, cost‑effectiveness, and policy impact; new metrics and impact evaluation methods are important for funding decisions.
Evaluation-and-measurement recommendation in the paper based on limitations of benchmark-focused evaluation observed in the collection (methodological recommendation).
There is an evolution from task‑specific automation toward systems that incorporate ecological domain knowledge, robustness to ecological heterogeneity, and evaluation on applied conservation objectives.
Evolution-of-approach observation based on trends reported across the papers in the collection (comparative description of earlier vs newer works).
AI-adopting firms exhibit higher productivity and higher market value after adoption.
Estimates showing increases in productivity (e.g., TFP measures) and market-value measures (e.g., market capitalization or Tobin's Q) for adopters relative to nonadopters using the stacked diff-in-diff design.
Post-adoption patents include more claims (i.e., are broader/more detailed) for AI-adopting firms.
Patent-level analysis using number of claims per patent as outcome in the stacked diff-in-diff framework.