Evidence (2954 claims)
Adoption (5126 claims)
Productivity (4409 claims)
Governance (4049 claims)
Human-AI Collaboration (2954 claims)
Labor Markets (2432 claims)
Org Design (2273 claims)
Innovation (2215 claims)
Skills & Training (1902 claims)
Inequality (1286 claims)
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
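The matrix rows can be tallied programmatically. Note that the four direction counts do not always sum to the listed Total (for Other, 369 + 105 + 58 + 432 = 964 against a Total of 972), which suggests some claims carry a direction outside these four columns. A minimal parsing sketch, with row strings copied from the table above (illustrative only, not part of the underlying dataset's API):

```python
def parse_row(line: str):
    """Split one markdown table row into (outcome, [pos, neg, mixed, null, total]).

    A dash cell ('—') marks no claims in that direction and is counted as 0.
    """
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    return cells[0], [0 if c == "—" else int(c) for c in cells[1:]]

# Example rows copied verbatim from the Evidence Matrix.
rows = [
    "| Other | 369 | 105 | 58 | 432 | 972 |",
    "| Firm Revenue | 96 | 30 | 22 | — | 148 |",
]
for line in rows:
    outcome, (pos, neg, mixed, null, total) = parse_row(line)
    # The listed directions may undershoot the Total if some claims are
    # classified with a direction not shown in these four columns.
    print(f"{outcome}: listed sum {pos + neg + mixed + null}, total {total}")
```

Running this prints a listed sum of 964 against a total of 972 for Other, and a matching 148/148 for Firm Revenue, so the gap is row-specific rather than a uniform offset.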
Filtered to: Human-AI Collaboration
Nursery crops represent a niche market opportunity for automation, robotics, and engineering companies to invest R&D capital, particularly because operating environments are neither uniform nor protected from weather extremes.
Paper's market analysis/opinion about R&D opportunities in nursery automation; no market size or investment data provided in the excerpt.
Adoption of automation by nursery operations may help retain current workers and attract new employees.
Paper's proposed/anticipated effect of automation on workforce retention and attraction; presented as a potential benefit rather than demonstrated causal evidence in the excerpt.
In the AI era, sustainable competitive advantage is rooted not in the technology itself, but in an organization's fundamental capacity to learn.
Normative/conceptual conclusion drawn from the paper's theoretical framework (dynamic capabilities and absorptive capacity emphasis). No empirical evidence or longitudinal validation provided.
The framework provides leaders with a diagnostic tool for guiding transformation in the AI era.
Practical implication offered in the paper (proposed diagnostic framework). The paper does not report empirical trials, user testing, or validation of the tool.
The ultimate effect of AI is determined not by its technical specifications but by an organization's absorptive capacity and its ability to learn, integrate knowledge, and adapt.
Theoretical integration of dynamic capabilities and micro-foundations in the paper; conditional model proposed. The paper does not report empirical testing or sample data to validate this conditioning effect.
AI reshapes organizations by rewriting routines, shifting mental models (cognitive frameworks), and redirecting resources.
Conceptual delineation within the paper identifying three loci of AI impact (routines, mental models, resources). No empirical measures or sample size provided.
AI functions as a catalytic force that operates on an organization's foundational elements and actively reshapes how institutions function.
Theoretical claim and conceptual argument developed in the paper (framework-level assertion). No empirical testing or sample reported.
Designing AI systems that are transparent, ethical, and inclusive is important to support adoption among both tech-savvy and less technologically adept consumers.
Normative/recommendation derived from study findings and synthesis (authors' interpretation/recommendation based on empirical results and literature integration).
Embedding managerial control, ethical reasoning, and contextual evaluation in AI-assisted workflows minimizes effects of algorithmic bias and automation bias and enhances workforce confidence.
Theoretical assertion supported by conceptual argument and literature integration in the paper. No empirical test, experimental manipulation, or quantitative measurement provided.
Through continuous learning (including lifelong learning) and fostering a culture of innovation, businesses can harness the full potential of GenAI, supporting growth and efficiency while equipping employees with the technical skills needed in an AI-enhanced world.
Conceptual claim grounded in literature review and thematic analysis; empirical measures of business growth, efficiency, or workforce technical skill gains are not reported in the abstract.
Companies need to adopt a human-centric approach to GenAI implementation to empower employees and support clients.
Argument supported by literature review and conceptual analysis; additionally informed by analysis of tasks across occupations (Erasmus+ projects) and discussions with trainers/educators. No empirical evaluation of organizations that adopted this approach is reported in the abstract.
The study advocates that IT organizations should ensure comprehensive AI literacy among employees by integrating best practices from the industry.
Policy/recommendation made in the paper's conclusions; no empirical intervention or measured effect described in the excerpt.
Employees should actively utilize AI tools and models to enhance innovation and productivity within their respective roles.
Recommendation advanced by the authors; no outcome measures or experimental evidence provided in the excerpt to quantify the effect.
AI advancements have fundamentally altered the nature of work, shifting it from labor-intensive processes to software-driven operations.
Stated claim in the paper's background; no specific empirical measure or result reported here.
Embedding games within broader DST ecosystems (market platforms, precision-agriculture systems, carbon accounting services) could unlock monetization routes (carbon markets, ecosystem service payments) and reduce transaction costs.
Argumentative synthesis grounded in examples of integration potential; few empirical studies have measured monetization outcomes or transaction cost reductions directly.
If GenAI materially speeds design iteration, firms could increase throughput, reduce time-to-market, or lower costs for certain design services, potentially expanding supply and putting downward pressure on prices for commoditized outputs.
Authors' implication based on qualitative reports of faster iteration in interviews; no empirical productivity or price data collected in the study.
GenAI appears to automate or accelerate routine, exploratory, and generative sub-tasks (early ideation, variant generation), while human designers retain evaluative judgment, contextualization, and final creative synthesis—indicating task-level complementarity rather than full substitution.
Authors' interpretation of interview data where students report GenAI speeding ideation and generating variants, combined with theoretical discussion; no quantitative task-time measures reported.
The program can reduce skill mismatches and increase effective labor supply in targeted sectors, altering relative demand for AI-complementary vs. AI-substitutable tasks.
Economic argument in paper (theoretical); no empirical tests or sample reported.
Better-aligned curricula can raise the productivity and employability of graduates, shifting returns to human capital and affecting wage distribution by skill.
Theoretical economic reasoning and program rationale presented in paper; no empirical causal evidence provided.
Advantages of the program include traceability, improved career-alignment and employability, audit readiness, and support for innovation through modelling and data analysis.
Paper lists these as intended advantages (asserted benefits); no empirical outcome data provided.
Regulation and workforce policy should be calibrated to the level of human-AI interaction: stronger oversight and validation for AI-augmented/automated systems, and workforce policies (reskilling, credentialing) to manage the transition to Human+ roles.
Policy recommendations based on the taxonomy and implications drawn from the four qualitative case studies and conceptual analysis.
Practitioners should combine the manufacturing operation tree with AI methods and real operational data to create validated, policy-aware simulation tools that support economic decision making.
Practical guidance and proposed integration steps in the paper; presented as recommended practice rather than demonstrated case examples.
The proposed roadmap can produce simulations that are realistic, validated against industry data, and useful for decision makers—supporting agility, resilience, and data-driven planning.
Conceptual roadmap and recommendations in the paper; no empirical demonstrations or validation studies included.
Functional and instrumental value of AI systems can speed organizational adoption via increased trust, implying economic importance of demonstrable productivity gains and clear ROI.
Interpretation/implication drawn from the study's empirical finding that functional/instrumental values increase initial trust and that trust positively affects adoption; this is an inference rather than a directly tested macroeconomic effect in the paper.
Policy instruments such as open-data mandates, compute-sharing incentives, and conditionality in R&D funding can help ensure equitable validation and local engagement in climate-AI development.
Policy recommendations grounded in normative analysis and analogies to existing public-good interventions; no empirical evaluation of these specific instruments provided in the paper.
Economists should prioritize research to quantify returns to investments in CDPI versus private compute, estimate economic costs of maladaptation from biased AI outputs, and design incentive-compatible mechanisms for data sharing and co-production.
Research agenda and recommendations presented by the authors; this is a suggested empirical/theoretical program rather than a tested result.
Establishing Climate Digital Public Infrastructure (CDPI)—shared, interoperable data and compute resources, standards, and governance—can democratize access and reduce inequities in climate-AI.
Policy proposal and normative argument drawing analogies to public goods (observational networks, satellites); no empirical evaluation of CDPI implementations presented.
Shifting from a model-centric to a data-centric approach (improving data quality, representativeness, and governance) will mitigate the harms caused by current infrastructural asymmetries.
Normative recommendation grounded in conceptual arguments and illustrative examples; not supported by empirical interventions or randomized/controlled comparisons in the paper.
Policy and governance should preserve worker agency (participatory design, transparency, clear accountability) and support training and institutional mechanisms (collective bargaining, workplace representation) to negotiate value-sharing from AI productivity gains.
Normative policy recommendation by authors derived from qualitative findings (workshops with 15 UX designers) that highlighted agency and distributional concerns.
Clinic-aware designs and reliable validation can enable clearer evidence of value, facilitating payer reimbursement, value-based care contracts, and new pricing models for AI-enabled medical devices and services.
Policy and reimbursement implications discussed by clinicians and industry participants during the workshop and summarized in the workshop report (NSF workshop, Sept 26–27, 2024).
Scalable validation ecosystems and continuous objective measures reduce information asymmetries between developers, clinicians, and payers, lowering commercialization and regulatory risk, which raises private returns and speeds adoption.
Economic implications and causal argument set out in the workshop summary based on expert judgement and theory discussed at the NSF workshop (Sept 26–27, 2024).
Procedural material modeling (Perlin noise) is a promising technique for robust policy learning and can reduce the need for extensive real-world data collection.
Implication stated in the paper's discussion: authors suggest procedural variation via Perlin noise aided robust policy learning and improved sim-to-real transfer; empirical quantification of reduced real data needs is not provided in the summary.
Perception providing the material's location inside the vial was used to guide the agent.
Paper summary states perception input (material location) was provided to the agent; sensing modality and accuracy/details of perception are not specified.
Privacy-preserving accountability logs can support ex post adjudication, insurance products, and reputational dynamics, reducing moral hazard.
Conceptual claim: protocol includes privacy-minded logs; paper argues potential for post-hoc review and insurance. No empirical tests of adjudication or insurance products provided.
Observable capability and coordination-risk signals enable more granular pricing, risk-based contracts, and differentiated service tiers (e.g., primary-only vs primary+auditor).
Policy/economic implication argued conceptually in the paper; no empirical pricing experiments or market data provided.
High capability profiles for some tasks will shift delegation toward agents (automation) and reallocate human labor toward supervision, auditing, and low-win-rate tasks.
Projection based on capability profiles and economic reasoning in the paper; presented as implications rather than empirically demonstrated. No labor-market empirical data provided.
Better matching of tasks to agent competencies improves allocative efficiency across task markets.
Theoretical/economic claim derived from capability profiles enabling improved matching; no empirical market experiments or measurements reported in the summary (field experiments suggested as future work).
Task-aware signals reduce search and screening costs by acting like quality/reliability metrics in delegation markets.
Economic implication argued conceptually in the paper: task-conditioned capability and coordination-risk signals function as observable quality metrics, reducing transaction costs. This is a theoretical argument; no empirical market-level test reported.
There is a strong complementarity between AI investments and organizational change: firms with better leadership, cross-functional processes, and data practices capture disproportionate benefits, implying increasing returns to scale and potential winner-take-most dynamics.
Authors' theoretical inference from cross-case patterns and economic reasoning; supported qualitatively by cases showing disproportionate gains in better-managed firms.
Firms that can credibly supply explainability and governance may capture a premium—explainability can be a competitive differentiator and a signal of quality and lower regulatory risk.
Conceptual synthesis and market-structure arguments from the reviewed literature; reviewed studies provide theoretical and some qualitative support but not systematic market-price estimates.
Policy should incentivize transparency, auditability, standards for human–AI interfaces, workforce development, certification of teaming practices, and liability frameworks to ensure accountability and equitable outcomes.
Normative recommendation based on ethical and governance considerations synthesized in the paper; not supported by policy evaluation evidence within the paper.
Orchestrating attention and interrogation through interface and workflow design helps manage what humans and AI focus on and how they challenge/verify each other, thereby reducing errors and misuse.
Prescriptive claim grounded in human factors and HCI literature synthesized by the authors; the paper suggests these mechanisms but does not report empirical trials demonstrating effects.
Design principles (define goals/constraints, partition roles, orchestrate attention/interrogation, build knowledge infrastructures, continuous training/evaluation) are necessary design levers to build high-performing, transparent, trustworthy, and equitable Human–AI teams.
Prescriptive synthesis from reviewed literatures and conceptual modeling; these principles are proposed heuristics rather than empirically validated interventions in the paper.
Firms that design processes to preserve human diversity and elicit diverse AI outputs may capture greater productivity gains, increasing returns to organizational capability rather than to raw model access.
Theoretical implication and prescriptive recommendation based on observed homogenization; no direct causal firm-level evidence presented, inference based on economic reasoning.
Investments to build trust in AI (transparency, reliability, training) are likely to have positive returns via higher adoption rates and realized AI benefits.
This is presented as an implication derived from observed positive associations between trust and outcomes; the study did not conduct cost–benefit or longitudinal causal tests of such investments in the reported analyses.
Practical levers to increase AI trust include transparency of AI models, demonstrated reliability, and manager-focused AI literacy/training.
Paper proposes these levers based on study findings and discussion (recommendations), but they were not tested experimentally in the reported cross-sectional survey.
A stronger data-driven decision culture that stems from AI trust yields better operational and academic outcomes.
Study reports positive associations between AI trust → data-driven culture → operational and academic outcomes in survey-based analyses; however, the summary does not specify which operational/academic metrics were measured or sample size.
The dissertation implies policy interventions (subsidies, tax incentives, training and integration assistance) can accelerate welfare-improving AI adoption by helping firms overcome the early negative part of the U-shaped profit profile.
Policy implication derived from the theoretical U-shaped profit relationship and model interpretation; not supported by randomized or quasi-experimental policy evaluation in the provided summary.
Vendors that embed robust cognitive interlocks into development platforms can command premium pricing by reducing downstream risk; verification features may become a competitive moat.
Market-structure and product-differentiation reasoning in the paper; no market data, pricing studies, or competitive analyses presented.
Human verification (and automated verification infrastructure) becomes the limiting factor and a scarce complement to AI generation, raising demand and wages for verification expertise and tooling.
Theoretical labor-market analysis and complementarity argument in the paper; no labor market data or econometric estimates provided.